Contents
- Target 2: Deterministic Ghost Scaling \(\gamma(\alpha_t)\)
- 2.1 Model and topological escape definition
- 2.2 Exact scaling symmetry and the weak-memory universality class
- 2.3 Prefactor estimate (constant-flux / ramp approximation)
- 2.4 Why earlier simulations reported \(\gamma_{\mathrm{meas}} < \gamma_{\mathrm{weak}}\)
- 2.5 Numerical validation and the strong-memory edge
- 2.6 Data collapse diagnostic (for figures)
- Target 3: Stochastic Escape and Fractional Survival Statistics
- 3.1 Model (barrier regime) and why MFPT fails
- 3.2 Canonical observable: survival function \(S(t)\)
- 3.3 Kaplan–Meier estimator and censoring
- 3.4 Extracting a rate scale \(\lambda\) from survival
- 3.5 Barrier scaling hypotheses and the “friction vs geometry” view
- 3.6 α-sweep as a decisive test
- Target 1: Fractional Logistic Universality (Discrete Map Operator)
- 4.1 Canonical discrete fractional map (memory on increments)
- 4.2 α=1 validation and classical scaling
- 4.3 Methodological boundary: cascade detection for \(\alpha_t<1\)
- 4.4 Diagnostic roadmap for the fractional cascade
- 4.5 Mechanistic sketch: RG fixed-point dissolution
- 4.6 Relation to ghost scaling and bifurcation class
- 4.7 Possible future diagnostic: Lyapunov tracking
- Editorial summary and integration
Target 2: Deterministic Ghost Scaling \(\gamma(\alpha_t)\)
This section combines the exact scaling derivation with the final ABM–PECE
unit tests for the saddle-node “ghost.” Together they show that the
deterministic exponent
\[
\gamma(\alpha_t) = \frac{1}{2\alpha_t}
\]
is realized to high numerical accuracy for \(\alpha_t \gtrsim 0.8\).
Earlier deviations are now interpreted as pre-asymptotic effects
or as possible signs of a separate strong-memory regime, rather than as
a settled alternative universality class.
2.1 Model and topological escape definition
We analyze the fractional saddle-node normal form with Caputo derivative:
\[
D_t^{\alpha_t} x(t)
= \mu + x(t)^2,\qquad
x(0)=0,\qquad
0<\alpha_t\le 1,\qquad
\mu\to 0^+.
\]
To avoid “metric threshold” pathologies, escape is defined relative to
the natural bottleneck scale \(\sqrt{\mu}\). Let \(\kappa\gg 1\) be fixed.
The topological escape time is
\[
\tau_{\mathrm{esc}}(\mu;\alpha_t)
= \inf\left\{ t>0\ :\ x(t)\ge \kappa\sqrt{\mu}\right\}.
\]
Equivalently, if \(\epsilon\) is a fixed metric threshold, introducing
\(\Theta=\epsilon/\sqrt{\mu}\) makes clear that meaningful scaling requires
\(\Theta\gg 1\).
2.2 Exact scaling symmetry and the weak-memory universality class
The clean derivation uses an exact scaling reduction. Set
\[
x(t) = \sqrt{\mu}\,y(s),\qquad
s = \mu^{\frac{1}{2\alpha_t}}\,t.
\]
Caputo derivatives obey the scaling rule
\[
D_t^{\alpha_t}[y(\lambda t)]
= \lambda^{\alpha_t}\,(D_s^{\alpha_t}y)(s),
\quad s=\lambda t.
\]
Using this with \(\lambda=\mu^{1/(2\alpha_t)}\), we obtain
\[
\begin{aligned}
D_t^{\alpha_t}x(t)
&= D_t^{\alpha_t}\!\big(\sqrt{\mu}\,y(\mu^{\tfrac{1}{2\alpha_t}}t)\big) \\
&= \sqrt{\mu}\,\big(\mu^{\tfrac{1}{2\alpha_t}}\big)^{\alpha_t}\,D_s^{\alpha_t}y(s)
\quad\text{(scaling rule)} \\
&= \sqrt{\mu}\,\mu^{\tfrac{1}{2}}\,D_s^{\alpha_t}y(s) \\
&= \mu\,D_s^{\alpha_t}y(s).
\end{aligned}
\]
Substituting this into the equation and dividing by \(\mu\) yields the
rescaled, \(\mu\)-free ghost equation
\[
D_s^{\alpha_t}y(s) = 1 + y(s)^2,\qquad y(0)=0.
\]
The escape condition becomes
\[
y(s_{\mathrm{esc}}) = \kappa
\quad\Rightarrow\quad
s_{\mathrm{esc}}(\alpha_t,\kappa)
= \inf\{s : y(s)\ge\kappa\},
\]
which depends only on \(\alpha_t\) and \(\kappa\), not on \(\mu\).
Transforming back to physical time, we obtain
\[
\tau_{\mathrm{esc}}(\mu;\alpha_t)
= \mu^{-\tfrac{1}{2\alpha_t}}\,s_{\mathrm{esc}}(\alpha_t,\kappa),
\]
so the weak-memory universality class is
\[
\boxed{\gamma_{\mathrm{weak}}(\alpha_t)=\frac{1}{2\alpha_t}.}
\]
2.3 Prefactor estimate (constant-flux / ramp approximation)
Near a saddle-node bottleneck the vector field is approximately constant:
in the classical case one uses the constant-flux approximation
\(\dot{x}\approx \mu\). The fractional analogue is
\[
D_s^{\alpha_t}y(s)\approx 1,
\]
which states that the fractional “flux” through the bottleneck is roughly
constant: the system spends most of its time in a region where the
right-hand side is nearly flat. Integrating the Caputo derivative of a
constant gives
\[
y(s)\approx \frac{s^{\alpha_t}}{\Gamma(\alpha_t+1)}.
\]
Setting \(y(s_{\mathrm{esc}})\approx \kappa\) yields
\[
s_{\mathrm{esc}}(\alpha_t,\kappa)\approx
\big(\kappa\,\Gamma(\alpha_t+1)\big)^{1/\alpha_t},
\]
and thus the asymptotic law
\[
\tau_{\mathrm{esc}}(\mu;\alpha_t)
\;\sim\;
\big(\kappa\Gamma(\alpha_t+1)\big)^{1/\alpha_t}\,
\mu^{-1/(2\alpha_t)}.
\]
This makes two points explicit:
- \(\gamma(\alpha_t)\) is universal in the weak-memory class,
- the prefactor is strongly \(\alpha_t\)-dependent,
scaling like \(\big(\kappa\Gamma(\alpha_t+1)\big)^{1/\alpha_t}\).
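As a quick numerical illustration of the second point, a minimal sketch (plain Python; it assumes nothing beyond the ramp formula above and the unit-test threshold \(\kappa=10\)) evaluates the prefactor across \(\alpha_t\). It grows from 10 at \(\alpha_t=1\) to roughly 23 at \(\alpha_t=0.7\):

```python
from math import gamma

kappa = 10.0  # topological escape threshold used in the unit tests
for alpha in (0.7, 0.8, 0.9, 0.95, 1.0):
    # Prefactor (kappa * Gamma(alpha+1))^(1/alpha) from the ramp approximation.
    prefactor = (kappa * gamma(alpha + 1.0)) ** (1.0 / alpha)
    print(f"alpha={alpha:.2f}  prefactor={prefactor:.2f}")
```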
2.4 Why earlier simulations reported \(\gamma_{\mathrm{meas}} < \gamma_{\mathrm{weak}}\)
Early L1-based simulations (fixed metric threshold, finite cutoff \(M\),
relatively coarse \(h\)) produced exponents such as
\(\gamma_{\mathrm{meas}}(0.7)\approx 0.43\) vs
\(\gamma_{\mathrm{weak}}(0.7)=1/(2\cdot 0.7)\approx 0.71\).
With the ABM–PECE unit test in place, we reinterpret those values as
pre-asymptotic effective exponents caused by:
- Metric escape contamination: a fixed escape level \(x=\epsilon\) mixes a
  \(\mu\)-dependent bottleneck time with a \(\mu\)-independent transit time
  for \(x^2\gg\mu\), which flattens the slope in \(\log\tau\) vs \(\log\mu\).
- Finite-memory truncation: cutting the history at \(M\) underestimates the
  tail of the Caputo kernel and can understate the effective drag, making
  escape too fast.
- Baseline inconsistency at \(\alpha_t=1\): using the limiting
  \(\kappa\to\infty\) formula \(\tau\sim\frac{\pi}{2\sqrt{\mu}}\) for
  \(\alpha_t=1\) while using a finite threshold in the fractional cases
  biases cross-\(\alpha_t\) comparisons.
The final unit test fixes this by:
- Using a strict topological escape \(x(t)\ge\kappa\sqrt{\mu}\) with \(\kappa=10\),
- Using an ABM–PECE solver with the correct endpoint coefficient \(a_{0,n+1}\),
- Checking convergence in \(M\), \(h\), and \(T_{\max}\).
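For concreteness, here is a minimal full-memory sketch of the standard fractional Adams–Bashforth–Moulton (PECE) scheme, including the endpoint coefficient \(a_{0,n+1}\). It is an illustration only; the production unit test additionally implements the memory cutoff \(M\) and the convergence checks listed above, and the function name `abm_pece` plus all parameter values in the usage lines are ours.

```python
import numpy as np
from math import gamma, sqrt

def abm_pece(f, x0, alpha, h, n_steps):
    """Fractional Adams-Bashforth-Moulton (PECE) for Caputo D^alpha x = f(t, x).

    Full-memory sketch: no cutoff M, so the cost is O(n_steps^2).
    """
    x = np.empty(n_steps + 1); x[0] = x0
    fx = np.empty(n_steps + 1); fx[0] = f(0.0, x0)
    w_pred = h**alpha / gamma(alpha + 1.0)  # predictor (rectangle-rule) weight
    w_corr = h**alpha / gamma(alpha + 2.0)  # corrector (trapezoid-rule) weight
    for n in range(n_steps):
        j = np.arange(n + 1.0)
        # Predictor: fractional rectangle rule over the full history.
        b = (n + 1 - j)**alpha - (n - j)**alpha
        x_pred = x0 + w_pred * np.dot(b, fx[:n + 1])
        # Corrector: fractional trapezoid rule with the exact endpoint
        # coefficient a_{0,n+1} = n^{alpha+1} - (n - alpha)(n + 1)^alpha.
        a = np.empty(n + 1)
        a[0] = n**(alpha + 1.0) - (n - alpha) * (n + 1)**alpha
        m = n - j[1:]
        a[1:] = (m + 2.0)**(alpha + 1.0) + m**(alpha + 1.0) \
                - 2.0 * (m + 1.0)**(alpha + 1.0)
        t_next = (n + 1) * h
        x[n + 1] = x0 + w_corr * (np.dot(a, fx[:n + 1]) + f(t_next, x_pred))
        fx[n + 1] = f(t_next, x[n + 1])
    return x

# Illustrative usage: topological escape time for the ghost at alpha = 0.9.
mu, alpha, kappa, h = 1e-3, 0.9, 10.0, 0.05
x = abm_pece(lambda t, xv: mu + xv**2, 0.0, alpha, h, 15000)
hits = np.nonzero(x >= kappa * sqrt(mu))[0]
tau_esc = hits[0] * h if hits.size else float("inf")
```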
2.5 Numerical validation and the strong-memory edge
The final \(\alpha\)-sweep of the ghost unit test produces:
| alpha | gamma_fit | gamma_theory | rel_error_pct |
|-------|-----------|--------------|---------------|
| 0.70  | 0.7997    | 0.7143       | +11.96        |
| 0.80  | 0.6318    | 0.6250       | +1.09         |
| 0.90  | 0.5555    | 0.5556       | -0.01         |
| 0.95  | 0.5261    | 0.5263       | -0.04        |
| 1.00  | 0.5000    | 0.5000       | ~0.00         |
For \(\alpha_t\ge 0.8\), the measured exponent agrees with
\(\gamma_{\mathrm{weak}}(\alpha_t)=1/(2\alpha_t)\) at the percent level.
At \(\alpha_t=0.7\) the fit overshoots the theoretical value by about
\(12\%\). Importantly, the sign of this deviation is not what one would
naively expect from finite-memory truncation alone (which tends to
reduce drag and thus lower \(\gamma\)). This makes the
\(\alpha_t=0.7\) point genuinely ambiguous:
- it could still be a pre-asymptotic effect (ghost window too narrow, with
  finite \(M\), finite \(h\), and a limited \(\mu\)-range conspiring to bias
  the fit),
- or it could be the onset of a distinct strong-memory regime where the
  simple \(\mu^{-1/(2\alpha_t)}\) scaling breaks down.
Monograph stance for Target 2:
we adopt \(\gamma(\alpha_t)=1/(2\alpha_t)\) as the canonical weak-memory
exponent, strongly supported for \(\alpha_t\ge 0.8\), and we flag the
\(\alpha_t=0.7\) deviation as an open strong-memory question rather
than forcing it into the “numerical artifact” box.
2.6 Data collapse diagnostic (for figures)
Beyond exponent fits, a useful visual diagnostic is a
data collapse plot. If
\(\tau_{\mathrm{esc}}(\mu;\alpha_t)\sim C(\alpha_t,\kappa)\,\mu^{-1/(2\alpha_t)}\),
then the rescaled quantity
\[
Y(\mu;\alpha_t)
= \tau_{\mathrm{esc}}(\mu;\alpha_t)\,\mu^{1/(2\alpha_t)}
\]
should be approximately constant across the \(\mu\)-range where the scaling
holds. Plotting \(Y\) vs \(\mu\) for fixed \(\alpha_t\) gives a flat line
in the asymptotic regime and makes any drift at large \(\mu\) (pre-asymptotic
contamination or true strong-memory breakdown) immediately visible.
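A minimal sketch of this diagnostic, reusing the hypothetical `abm_pece` helper sketched in Section 2.4 (coarse \(h\) and a short \(\mu\)-list for speed; the unit tests check \(h\)-convergence separately):

```python
import numpy as np
from math import sqrt

alpha, kappa, h = 0.9, 10.0, 0.1
for mu in (1e-2, 3e-3, 1e-3, 3e-4):
    x = abm_pece(lambda t, xv: mu + xv**2, 0.0, alpha, h, 15000)
    hits = np.nonzero(x >= kappa * sqrt(mu))[0]
    tau = hits[0] * h if hits.size else float("nan")
    # Y should be ~flat in mu wherever the scaling law holds.
    print(f"mu={mu:.0e}  Y={tau * mu**(1.0 / (2.0 * alpha)):.3f}")
```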
Target 3: Stochastic Escape and Fractional Survival Statistics
This section combines the survival-analysis framework (Kaplan–Meier and
\(\lambda\) extraction) with unit tests at \(\alpha_t=1\) and
\(\alpha_t=0.95\). At \(\alpha_t=1\), the procedure reproduces classical
Kramers scaling. At \(\alpha_t=0.95\), both the classical barrier exponent
\(p=3/2\) (Hypothesis A) and the fractional exponent
\(p=3/(2\alpha_t)\approx 1.58\) (Hypothesis B) yield high-quality fits.
We emphasize that the fractional Langevin model
\[
D_t^{\alpha_t}x(t)=\mu + x(t)^2 + \sigma\,\eta(t)
\]
is heuristic: it is not derived from a Hamiltonian with a
fluctuation–dissipation theorem in the strict generalized Langevin sense.
The “memory as friction vs memory as geometry” language below should be
read as a phenomenological interpretation of the observed scaling,
not as a microscopic derivation.
3.1 Model (barrier regime) and why MFPT fails
We consider the heuristic fractional Langevin normal form:
\[
D_t^{\alpha_t}x(t)
= \mu + x(t)^2 + \sigma\,\eta(t),\qquad
\mu<0,
\]
where \(\eta(t)\) is idealized white noise. In the classical case \(\alpha_t=1\),
the drift corresponds to the potential
\[
V(x)= -\mu x - \frac{x^3}{3},
\]
with barrier height
\[
\Delta V = \frac{4}{3}|\mu|^{3/2}.
\]
For \(\alpha_t<1\), the first-passage time distribution becomes heavy-tailed
(often Mittag–Leffler-like), so the mean first-passage time (MFPT) may be
extremely large or effectively undefined within finite simulation windows.
When many paths hit \(T_{\max}\) without escaping, the naive sample mean
of \(\tau_i\) is severely biased, and regressions of \(\ln\bar{\tau}\)
vs \(|\mu|^p/\sigma^2\) become statistically invalid.
3.2 Canonical observable: survival function \(S(t)\)
The robust observable is the survival function
\[
S(t)=\Pr(\tau>t).
\]
For fractional kinetics, a natural parametric form is a Mittag–Leffler
survival:
\[
S(t)\approx E_{\alpha_t}\!\left(-\lambda t^{\alpha_t}\right),\qquad
E_{\alpha}(z)=\sum_{n=0}^{\infty}\frac{z^n}{\Gamma(\alpha n+1)}.
\]
Short-time expansion:
\(S(t)\approx 1-\lambda t^{\alpha_t}/\Gamma(\alpha_t+1)+\cdots\).
Long-time asymptotics: \(S(t)\sim t^{-\alpha_t}\). In both regimes
the decay is non-exponential and no constant rate exists in the
usual Poissonian sense.
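For small and moderate arguments, the defining series can be evaluated directly; a minimal sketch (the function name `ml_series` and the illustrative values are ours, and robust evaluation for large negative arguments requires specialized algorithms not reproduced here):

```python
from math import gamma

def ml_series(z, alpha, n_terms=100):
    """Truncated Mittag-Leffler series E_alpha(z) = sum z^n / Gamma(alpha*n + 1).

    Reliable only for moderate |z|; catastrophic cancellation sets in for
    large negative z, where the t^(-alpha) tail asymptotics apply instead.
    """
    return sum(z**n / gamma(alpha * n + 1.0) for n in range(n_terms))

# Short-time survival S(t) ~ E_alpha(-lambda * t^alpha), illustrative values:
alpha, lam = 0.95, 1.0
for t in (0.1, 0.5, 1.0, 2.0):
    print(f"t={t}  S~{ml_series(-lam * t**alpha, alpha):.4f}")
```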
3.3 Kaplan–Meier estimator and censoring
Given \(N\) trajectories with observed times \(\tau_i\) and event flags
\(\delta_i\in\{0,1\}\) (\(\delta_i=1\) escaped, \(\delta_i=0\) censored
at \(T_{\max}\)), the Kaplan–Meier estimator is
\[
\widehat{S}(t)
= \prod_{t_j\le t}
\left(1 - \frac{d_j}{n_j}\right),
\]
where \(t_j\) are distinct event times, \(d_j\) escapes at \(t_j\),
and \(n_j\) the number at risk just before \(t_j\). This estimator is
specifically designed to handle right-censored data: each censored path
contributes the information that it survived at least until \(T_{\max}\),
preventing the downward bias that would arise from simply discarding or
averaging over censored runs.
As long as censoring (hitting \(T_{\max}\)) is independent of the
instantaneous hazard of escape — which is the case for our fixed-time
truncation — Kaplan–Meier remains an unbiased estimator of the survival
curve \(S(t)\), even for fairly high censoring fractions.
We always report the censoring fraction
\[
f_c = \frac{\#\{\delta_i=0\}}{N}.
\]
When \(f_c\gtrsim 0.3\), any inferred rate scale \(\lambda\) from
the tail should be regarded as a lower bound for that \(\mu\).
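A minimal self-contained Kaplan–Meier sketch (no survival-analysis library assumed; the function name `kaplan_meier` and the synthetic exponential data below are purely illustrative):

```python
import numpy as np

def kaplan_meier(tau, delta):
    """Kaplan-Meier survival estimate from observed times tau and event
    flags delta (1 = escaped, 0 = censored at T_max)."""
    tau = np.asarray(tau, float)
    delta = np.asarray(delta, int)
    event_times = np.unique(tau[delta == 1])
    # n_j: number at risk just before t_j; d_j: escapes exactly at t_j.
    n_at_risk = np.array([(tau >= t).sum() for t in event_times])
    n_events = np.array([((tau == t) & (delta == 1)).sum() for t in event_times])
    return event_times, np.cumprod(1.0 - n_events / n_at_risk)

# Illustrative usage with synthetic escape times:
rng = np.random.default_rng(1)
raw = rng.exponential(2.0, size=4000)   # hypothetical raw first-passage times
T_max = 5.0
tau = np.minimum(raw, T_max)
delta = (raw < T_max).astype(int)
t_ev, S_hat = kaplan_meier(tau, delta)
f_c = 1.0 - delta.mean()                # censoring fraction, always reported
```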
3.4 Extracting a rate scale \(\lambda\) from survival
To compare barriers across \((\mu,\sigma,\alpha_t)\), we define a rate
scale \(\lambda(\mu,\sigma,\alpha_t)\) by fitting
\[
y(t) = -\ln\widehat{S}(t)
\approx \frac{\lambda}{\Gamma(\alpha_t+1)}\,t^{\alpha_t}
\]
on the mid-range of the survival curve (e.g. \(0.2\lesssim S\lesssim 0.8\)).
Equivalently, one can perform a log–log regression
\(\log y \approx \log\lambda - \log\Gamma(\alpha_t+1)+\alpha_t\log t\).
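A sketch of this extraction, assuming the `t_ev`, `S_hat` output of the Kaplan–Meier sketch above (since \(\alpha_t\) is fixed, only the intercept is fitted; freeing the slope provides a consistency check on \(\alpha_t\)):

```python
import numpy as np
from math import gamma

def lambda_from_survival(t_ev, S_hat, alpha):
    """Fit -ln S_hat(t) ~ (lambda / Gamma(alpha + 1)) * t^alpha on the
    mid-range of the survival curve (0.2 <= S_hat <= 0.8)."""
    mask = (S_hat >= 0.2) & (S_hat <= 0.8) & (t_ev > 0)
    y = -np.log(S_hat[mask])
    # log y = [log lambda - log Gamma(alpha+1)] + alpha * log t,
    # so with alpha fixed we fit only the intercept.
    intercept = np.mean(np.log(y) - alpha * np.log(t_ev[mask]))
    return np.exp(intercept) * gamma(alpha + 1.0)

lam_hat = lambda_from_survival(t_ev, S_hat, alpha=0.95)
```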
α=1 baseline (classical Kramers):
in the final unit test for \(\alpha_t=1\) with \(T_{\max}=5\),
\(N_{\mathrm{traj}}=4000\), and \(\sigma=0.05\), plotting
\(\ln\hat{\lambda}\) against
the classical Kramers variable \(|\mu|^{3/2}/\sigma^2\) gives an
almost perfectly linear relation (fit on the first three barrier points)
with \(R^2\approx 0.96\), despite censoring up to \(\sim 0.66\).
This anchors the survival-based methodology to the known exponent
\(p(1)=3/2\) and validates \(\hat{\lambda}\) as the appropriate observable.
3.5 Barrier scaling hypotheses and the “friction vs geometry” view
We consider two competing interpretations for how memory and noise interact
with the barrier:
Hypothesis A — Memory as friction (GLE-like view)
The potential landscape \(V(x)\) is static, with barrier
\(\Delta V\propto|\mu|^{3/2}\) as in the classical Kramers problem.
The fractional operator enters as a generalized friction:
memory modifies the temporal kernel of dissipation but not the geometry
of the energy landscape.
\[
\lambda(\mu,\sigma,\alpha_t)
\propto
\exp\!\left(-\frac{C\,|\mu|^{3/2}}{\sigma^2}\right),
\qquad
S(t)\approx E_{\alpha_t}\!\left(-\lambda t^{\alpha_t}\right).
\]
In this “friction view,” non-locality is kinetic rather than
energetic: memory slows the rate (\(\lambda\) and characteristic times)
but does not change the work \(\Delta V\) required to escape.
Hypothesis B — Memory as geometry (fractional barrier exponent)
Here the fractional derivative is taken to alter the effective
energy landscape itself, so that the static barrier exponent in
\(|\mu|\) becomes \(\alpha_t\)-dependent:
\[
\log\lambda
\sim
-\frac{C\,|\mu|^{p(\alpha_t)}}{\sigma^2},\qquad
p(\alpha_t)\approx\frac{3}{2\alpha_t}.
\]
This “geometry view” unifies the deterministic exponent
\(\gamma(\alpha_t)=1/(2\alpha_t)\) and the stochastic exponent
\(p(\alpha_t)=3/(2\alpha_t)\) by treating the fractional time
stretch as renormalizing both the drift and the effective barrier.
Fractional test at \(\alpha_t=0.95\): what we can and cannot say
For \(\alpha_t=0.95\), with \(T_{\max}=5\), \(N_{\mathrm{traj}}=4000\)
and \(\sigma=0.05\), we obtain survival-based rate estimates
\(\hat{\lambda}(\mu)\) with censoring fractions ranging from
\(\sim 0.63\) at the strongest barrier to \(\sim 0.10\) at the weakest.
Fitting \(\ln\hat{\lambda}\) vs \(|\mu|^p/\sigma^2\) using the first
three barrier points yields:
- Hyp A (\(p=3/2\)): slope \(\approx -3.15\), \(R^2\approx 0.94\).
- Hyp B (\(p=3/(2\alpha_t)\approx 1.5789\)): slope \(\approx -4.47\),
  \(R^2\approx 0.93\).
Both hypotheses produce high-quality Arrhenius fits
(\(R^2\approx 0.93\text{–}0.94\)). The small difference
\(\Delta R^2\approx 0.01\) in favor of Hypothesis A is
suggestive but not statistically decisive at
this sample size and censoring level. It is compatible with the
friction view (memory primarily kinetic) but does not yet rule out
the geometry view.
The correct interpretation at \(\alpha_t=0.95\) is therefore:
“in this weak-memory regime, the data are consistent with a static
Boltzmann barrier and fractional kinetics; we cannot conclusively
discriminate between the proposed exponents using this single
α value.”
3.6 α-sweep as a decisive test
The exponents for Hypotheses A and B are very close at
\(\alpha_t\approx 1\):
\(p_A=1.5\) vs \(p_B\approx 1.58\) at \(\alpha_t=0.95\).
To decisively distinguish the “friction” and “geometry” views,
it is necessary to go to smaller \(\alpha_t\) where the exponents
separate strongly. For example, at \(\alpha_t=0.7\),
\[
p_A = \frac{3}{2}=1.5,\qquad
p_B = \frac{3}{2\alpha_t}\approx\frac{3}{1.4}\approx 2.14,
\]
a gap large enough to resolve even with noise and censoring.
Essential next step for Target 3: perform an α-sweep
(e.g. \(\alpha_t\in\{0.6,0.7,0.8,0.9,0.95,1.0\}\)) at fixed \(\sigma\),
extract \(\hat{\lambda}(\mu,\alpha_t)\) via Kaplan–Meier, and compute
\(R^2_A(\alpha_t)\) and \(R^2_B(\alpha_t)\) for the two exponents
\(p_A=3/2\) and \(p_B=3/(2\alpha_t)\). Then examine
\[
\Delta R^2(\alpha_t)
= R^2_A(\alpha_t) - R^2_B(\alpha_t).
\]
- If \(\Delta R^2(\alpha_t)\gtrsim 0\) for all \(\alpha_t\), the friction
  view wins: fractional memory acts as generalized friction, slowing
  kinetics but leaving \(\Delta V\propto|\mu|^{3/2}\) intact.
- If \(\Delta R^2(\alpha_t)\) becomes clearly negative at low \(\alpha_t\)
  (e.g. \(\alpha_t\le 0.7\)), the geometry view gains support: strong
  memory deforms the effective barrier exponent and produces a genuine
  universality fracture in the stochastic sector.
In this sense, the current \(\alpha_t=0.95\) result is the “edge” of the
crossover: it demonstrates that fractional kinetics alone can account for
the data near \(\alpha_t=1\), and it motivates (but does not yet complete)
the full α-sweep required to decide between friction and geometry at strong
memory.
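The bookkeeping for this decision rule is simple; a sketch, assuming a hypothetical dict `lam_hat` mapping \((\alpha_t,\mu)\) pairs to the Kaplan–Meier rate estimates described above (the helper names `r_squared` and `delta_r2_sweep` are ours):

```python
import numpy as np

def r_squared(x, y):
    """R^2 of the least-squares line y ~ a + b*x."""
    b, a = np.polyfit(x, y, 1)
    resid = y - (a + b * x)
    return 1.0 - resid.var() / y.var()

def delta_r2_sweep(lam_hat, alphas, mus, sigma):
    """Delta R^2(alpha) = R^2_A - R^2_B for p_A = 3/2 vs p_B = 3/(2 alpha)."""
    abs_mu = np.abs(np.asarray(mus, float))
    out = {}
    for al in alphas:
        ln_lam = np.log([lam_hat[(al, mu)] for mu in mus])
        x_A = abs_mu ** 1.5 / sigma**2          # friction view
        x_B = abs_mu ** (1.5 / al) / sigma**2   # geometry view
        out[al] = r_squared(x_A, ln_lam) - r_squared(x_B, ln_lam)
    return out
```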
Target 1: Fractional Logistic Universality (Discrete Map Operator)
Target 1 focuses on a discrete fractional map rather than a
time-discretized flow: memory is introduced directly at the level of the
map iterates. This is crucial, because discrete maps can have different
universality classes than continuous-time systems, and treating a map as
a badly-sampled flow risks destroying the mechanism that generates chaos.
4.1 Canonical discrete fractional map (memory on increments)
Let \(g(x)=r x(1-x)\) and define the increment
\[
\Delta x_k = g(x_k)-x_k.
\]
Define the discrete Caputo-increment map on \(\mathbb{N}_0\) by
\[
x_n
= x_0
+ \frac{1}{\Gamma(\alpha_t)}
\sum_{k=0}^{n-1}
\frac{\Gamma(n-1-k+\alpha_t)}{\Gamma(n-k)}\,
\big[g(x_k)-x_k\big],\qquad 0<\alpha_t\le 1.
\]
The ratio
\(\Gamma(n-1-k+\alpha_t)/\Gamma(n-k)\) acts as a discrete power-law
memory kernel in the iterate index, analogous to \(t^{-\alpha_t}\) in
continuous time. This is a map-first construction: the nonlinear
fold \(g(x)\) that drives the Feigenbaum cascade remains unchanged, and
memory acts on the increment sequence \(\{\Delta x_k\}\), not on a
discretized ODE.
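A minimal sketch of this operator (the function name `caputo_increment_map` is ours; the kernel ratio is evaluated by a stable recurrence rather than raw Gamma ratios, and the optional cutoff M truncates the history sum under the convention that the \(x_0\) base point is retained):

```python
import numpy as np
from math import gamma

def caputo_increment_map(r, x0, alpha, n_steps, M=None):
    """Discrete Caputo-increment logistic map:
        x_n = x_0 + (1/Gamma(alpha)) * sum_{k<n} W[n-k] * (g(x_k) - x_k),
    with kernel W[m] = Gamma(m - 1 + alpha) / Gamma(m) and g(x) = r x (1 - x).
    """
    # Stable recurrence: W[1] = Gamma(alpha), W[m+1] = W[m] * (m - 1 + alpha) / m.
    # At alpha = 1 every weight is exactly 1, so the sum telescopes.
    W = np.empty(n_steps + 1)
    W[1] = gamma(alpha)
    for m in range(1, n_steps):
        W[m + 1] = W[m] * (m - 1 + alpha) / m
    x = np.empty(n_steps + 1); x[0] = x0
    dx = np.empty(n_steps)                      # increments g(x_k) - x_k
    for n in range(1, n_steps + 1):
        xp = x[n - 1]
        dx[n - 1] = r * xp * (1.0 - xp) - xp
        lo = 0 if M is None else max(0, n - M)  # optional history cutoff
        k = np.arange(lo, n)
        x[n] = x0 + np.dot(W[n - k], dx[lo:n]) / gamma(alpha)
    return x
```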
4.2 α=1 validation and classical scaling
For \(\alpha_t=1\), the kernel factor is 1 and the sum telescopes so that
\[
x_{n+1}-x_n=g(x_n)-x_n
\quad\Rightarrow\quad
x_{n+1}=g(x_n),
\]
exactly recovering the classical logistic map. The unit test verifies this
in three ways:
- α=1 recovery on random (r, x0) in a stable window with max error ~1e-14,
- M-convergence of tails at α≈1 (Caputo memory vs cutoff),
- Correct detection of classical Feigenbaum bifurcation points and δ_k.
This validates the discrete operator and period-detection machinery
in the classical case.
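A compressed version of the first check, assuming the `caputo_increment_map` sketch above (we use a slightly looser tolerance than the production test's ~1e-14 to absorb summation-order effects):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    r, x0 = rng.uniform(2.5, 3.4), rng.uniform(0.1, 0.9)  # stable window
    x_frac = caputo_increment_map(r, x0, alpha=1.0, n_steps=500)
    x_classic = np.empty(501); x_classic[0] = x0
    for n in range(500):
        x_classic[n + 1] = r * x_classic[n] * (1.0 - x_classic[n])
    # At alpha = 1 the kernel telescopes: the two orbits must coincide.
    assert np.max(np.abs(x_frac - x_classic)) < 1e-12
```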
4.3 Methodological boundary: cascade detection for \(\alpha_t<1\)
When we naively reuse the α=1 detection parameters (tolerances, transient
length N_trans, memory cutoff M) for \(\alpha_t<1\), the current pipeline
typically finds no clean period-doubling cascade at all: the
classifier either reports “fixed” or “chaotic” behavior, with zero or very
few resolved 2^k windows.
We do not interpret this as evidence that the cascade
truly disappears for \(\alpha_t<1\). Rather, we treat it as a
detection failure caused by:
- Slower convergence: memory drag makes orbits converge more slowly;
  N_trans calibrated for α=1 may be far too small at α<1.
- Over-tight tolerances: period-recurrence tolerances tuned for a
  fast-mixing classical map may be too strict when memory smears the
  attractor.
- Shifted bifurcation windows: period-doubling thresholds in r can shift
  under memory; scanning only the classical r-range may miss the deformed
  cascade.
A provisional, honest boundary statement is:
\[
\boxed{
\text{With current detection settings, no cascade is resolved for }
\alpha_t<1;\ \text{this is treated as a methodological limitation,}
\text{ not a confirmed physical cutoff.}
}
\]
4.4 Diagnostic roadmap for the fractional cascade
To resolve whether the cascade truly “fractures” or is simply
hard to detect, we adopt the following diagnostic protocol:
- Orbit visualization: for a test case (e.g. \(\alpha_t=0.98,\ r=3.5\)),
  plot \(x_n\) for \(n\in[0,5\times 10^4]\). Does the orbit look periodic,
  quasi-periodic, or chaotic?
- Attractor sampling: after a long transient (e.g. N_trans = 50,000),
  collect N_keep = 10,000 points, compute a histogram, and estimate how
  many clusters appear. One cluster suggests a fixed point, two a period-2
  orbit, etc.
- Tolerance sweep: fix \(\alpha_t=0.98,\ r=3.5\); vary the recurrence
  tolerance (e.g. 1e-4, 1e-5, ..., 1e-8) and track when period detection
  succeeds.
- Transient sweep: fix \(\alpha_t=0.98,\ r=3.5\) and a workable tolerance;
  vary N_trans (2000; 10,000; 50,000; 200,000) and test whether the
  detected period stabilizes.
- r-range exploration: for \(\alpha_t=0.98\), scan r ∈ [1.0, 4.0] with a
  coarse step, and plot simple indicators (e.g. max(x_tail) - min(x_tail))
  to locate rough bifurcation regions before doing a fine δ_k analysis;
  a sketch of this scan is given at the end of this subsection.
Only after these diagnostics converge (M, N_trans, tolerance, r-range)
is it meaningful to ask whether δ(α_t) drifts smoothly (deformation) or
the cascade terminates early (fracture).
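A sketch of the final roadmap item (the coarse r-scan), assuming the `caputo_increment_map` helper from 4.1; N_trans, N_keep, and M are reduced below the roadmap values purely for tractability, so M-convergence must still be checked separately:

```python
import numpy as np

alpha, n_trans, n_keep, M = 0.98, 20_000, 5_000, 2_000
for r in np.arange(1.0, 4.01, 0.1):
    with np.errstate(over="ignore", invalid="ignore"):
        x = caputo_increment_map(r, 0.3, alpha, n_trans + n_keep, M=M)
    tail = x[-n_keep:]
    if not np.all(np.isfinite(tail)):
        print(f"r={r:.1f}  diverged (memory pushed the orbit out of [0, 1])")
        continue
    # Crude indicator: ~0 spread = fixed point; small discrete spread =
    # periodic orbit; broad spread = chaos or unresolved transients.
    print(f"r={r:.1f}  spread={tail.max() - tail.min():.4f}")
```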
4.5 Mechanistic sketch: RG fixed-point dissolution
In the classical logistic map, Feigenbaum universality comes from a
renormalization operator acting by composition:
\[
\mathcal{R}[f](x) = \alpha_F\,f(f(x/\alpha_F)),
\]
which has a hyperbolic fixed point \(f^*\) with a single unstable
eigenvalue \(\delta\approx 4.669\). Memory changes the problem in two
ways:
- Each “step” now depends on many past iterates, so the effective
  evolution from \(x_n\) to \(x_{n+1}\) is non-local in iterate index.
- The RG operator must act not only on a function \(f\) but also on a
  memory kernel or operator \(M\).
Two qualitatively different scenarios are plausible:
- Deformation: the RG fixed point \(f^*_{\alpha_t}\) persists for
  \(\alpha_t<1\), but its eigenvalues and scaling constants deform
  continuously, giving a smooth δ(α_t).
- Dissolution: memory destroys hyperbolicity; the fixed point becomes
  marginal or disappears, and the cascade truncates beyond some
  \(\alpha_{c,\text{cascade}}\).
Our current numerics are not yet able to distinguish these scenarios.
The honest state is that Target 1 has produced a well-posed discrete
operator and an α=1 validation, but the α<1 cascade behavior
remains an open problem.
4.6 Relation to ghost scaling and bifurcation class
Targets 1 and 2 probe different bifurcation classes under fractional
memory:
- Target 2 analyzes a saddle-node ghost (rank-2 degeneracy) in a
  continuous-time Caputo flow.
- Target 1 concerns a cascade of period-doubling bifurcations, which in
  the iterated map appear as a sequence of pitchfork-like transitions in
  the period-n map.
A unified picture of “fractional universality” would require analyzing
the fractional ghost of pitchfork bifurcations as well, something we
have not yet attempted. For now, Target 2 establishes a solid baseline
for saddle-node ghosts, while Target 1 marks out the open terrain for
pitchfork/cascade behavior under memory.
4.7 Possible future diagnostic: Lyapunov tracking
In classical maps, the onset of chaos is robustly detected by the
maximal Lyapunov exponent
\[
\lambda_{\max} = \lim_{N\to\infty}
\frac{1}{N}\sum_{n=0}^{N-1}\ln|g'(x_n)|.
\]
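For reference, the classical estimator is only a few lines (a sketch for the memoryless logistic map only; at \(r=4\) the exact value \(\lambda_{\max}=\ln 2\) provides a check, and the function name is ours):

```python
from math import log

def lyapunov_logistic(r, x0=0.3, n_trans=1_000, n_avg=100_000):
    """Maximal Lyapunov exponent of x -> r*x*(1-x) via the Birkhoff
    average of ln|g'(x_n)| = ln|r*(1 - 2*x_n)|, after a transient."""
    x = x0
    for _ in range(n_trans):
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n_avg):
        x = r * x * (1.0 - x)
        acc += log(abs(r * (1.0 - 2.0 * x)))
    return acc / n_avg

# lyapunov_logistic(4.0) converges to ln 2 ~ 0.693.
```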
For fractional maps, the Jacobian structure is more complicated
because the state depends on many past iterates. Nonetheless, constructing
an effective Lyapunov diagnostic that tracks the growth rate of tangent
directions under the memory operator would provide an independent check
on where chaos appears or disappears as \(\alpha_t\) and r vary.
We leave this as a future, more technical extension.
Editorial summary and integration
- Target 2 (Deterministic Ghost): exact scaling symmetry plus ABM–PECE
  unit tests support \(\gamma(\alpha_t)=1/(2\alpha_t)\) as the canonical
  weak-memory exponent, strongly confirmed for \(\alpha_t\ge 0.8\). The
  \(\alpha_t=0.7\) deviation is flagged as a possible strong-memory regime
  change rather than dismissed as mere numerical noise.
- Target 3 (Stochastic Escape): survival-based \(\lambda\) extraction via
  Kaplan–Meier is validated at \(\alpha_t=1\) (classical Kramers scaling),
  and at \(\alpha_t=0.95\) both the “friction” (p=3/2) and “geometry”
  (p=3/(2α)) views fit well, with current data slightly favoring the
  friction interpretation but not decisively. An α-sweep (down to at least
  α≈0.7) is required to see whether memory alters only kinetics or also
  the effective barrier exponent.
- Target 1 (Discrete Map): the discrete Caputo-increment map passes α=1
  recovery and M-convergence tests and reproduces classical Feigenbaum
  structure. For α<1 the present detection pipeline fails to resolve a
  cascade; this is treated as a methodological boundary, and a dedicated
  diagnostic protocol is outlined to determine whether the cascade is
  merely hard to see or genuinely fractures under memory.
Overall, the project has moved from debugging numerics to
mapping the methodological and physical boundaries of
fractional dynamics: how memory alters deterministic scaling, how it
interacts with noise, and how to implement discrete fractional maps
without accidentally changing the underlying physics.