Boundedness vs Unboundedness of a Noise Linked to Tsallis q-Statistics: The Role of the Overdamped Approximation

An apparently ideal way to generate continuous bounded stochastic processes is to consider the stochastically perturbed motion of a point of small mass in an infinite potential well, under the overdamped approximation. Here, however, we show that this procedure can be fallacious and can lead to incorrect results. We provide a counter-example concerning one of the most frequently employed bounded noises, hereafter called the Tsallis-Stariolo-Borland (TSB) noise, which admits the well-known Tsallis q-statistics as its stationary density. In fact, we show that for negative values of the Tsallis parameter q (corresponding to a sufficiently large diffusion coefficient of the stochastic force), the motion resulting from the overdamped approximation is unbounded. We then investigate the cause of the failure of the first-type Kramers approximation, and we formally show that the solutions of the full, non-approximated Newtonian model are bounded, in agreement with physical intuition. Finally, we provide a new family of bounded noises extending the TSB noise, whose solutions we formally show to be bounded.


Introduction
In mathematical biophysics, the influence of extrinsic sources of stochasticity on otherwise deterministic biological systems is very frequently taken into account in an elementary way. Indeed, the deterministic dynamical system (valid in the absence of the above-mentioned sources) is often perturbed by allowing one or more of its parameters to fluctuate stochastically via a white noise or a colored Gaussian perturbation. This approach is very convenient and often allows one to make analytical or partially analytical inferences, but it can lead to artifacts, sometimes not perceived by modelers. For example, as stressed in [1], modeling the stochastic fluctuations affecting an anti-tumor therapy by means of a white noise means that the model allows the possibility that the therapy adds tumor cells instead of killing them: a very gross artifact. Indeed, by denoting with Y the tumor size, with r(Y) its net growth rate, and with θ > 0 the anti-tumor cytotoxic therapy, one gets the following mathematical model:

dY = r(Y) Y dt − θ Y (dt + σ dB).

This implies that, in the realizations of the stochastic process, the term θY(dt + σdB) is quite frequently negative: in biological terms, tumor cells would be added! Moreover, this modeling approach also allows an excessive instantaneous growth of the therapy term, which is another equally important artifact. A natural remedy is to adopt bounded stochastic perturbations. A particularly relevant class of such perturbations is linked to the Tsallis q-statistics, i.e. to the family of probability densities

ρ_q(z) = A_q [1 − (1 − q) z²/((3 − q) σ²)]₊^{1/(1−q)},

where σ > 0, q is a real number smaller than 3, and A_q is the normalization constant. This distribution has some noteworthy properties [2]: i) lim_{q→1} ρ_q(z) = N(0, σ²(3 − q)/2), i.e. the Gaussian is recovered; ii) for q > 1, the distribution for z ≫ σ is a power law (with diverging second moment if q ∈ (5/3, 3)); iii) for q < 1, Z is bounded, with ρ_q(z) equal to zero outside the interval (−σ√((3−q)/(1−q)), σ√((3−q)/(1−q))); iv) for q → −∞, the distribution of z tends to the uniform distribution on (−1, 1) (taking σ = 1).
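As a quick numerical sanity check of properties i)-iv), the following sketch evaluates the (unnormalized) q-Gaussian shape, assuming the standard parameterization ρ_q(z) ∝ [1 − (1 − q) z²/((3 − q)σ²)]₊^{1/(1−q)}; the normalization constant A_q is omitted, since only the shape matters here.

```python
import math

def q_density_unnorm(z, q, sigma=1.0):
    """Unnormalised Tsallis q-Gaussian (support-truncated for q < 1)."""
    base = 1.0 - (1.0 - q) * z * z / ((3.0 - q) * sigma * sigma)
    if base <= 0.0:
        return 0.0          # outside the bounded support (case q < 1)
    return base ** (1.0 / (1.0 - q))

def support_edge(q, sigma=1.0):
    """Boundary sigma*sqrt((3-q)/(1-q)) of the support, for q < 1."""
    return sigma * math.sqrt((3.0 - q) / (1.0 - q))

# q < 1: the density vanishes beyond the support boundary (here sqrt(2))
print(q_density_unnorm(1.5, -1.0))  # -> 0.0

# q -> 1: the shape approaches the Gaussian exp(-z^2/(2 sigma^2))
z = 0.7
print(abs(q_density_unnorm(z, 0.999) - math.exp(-z * z / 2.0)) < 1e-3)  # -> True
```

The first check illustrates property iii) (bounded support), the second property i) (Gaussian limit).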
Thus, the Tsallis thermostatistical theory is not only able to unify the Gaussian and power-law Lévy behaviors, as stressed in [2], but it also describes an important class of bounded stochastic behaviors. Stariolo [5] (see also [6]) investigated the problem of identifying an overdamped stochastic dynamics in phase space that has the Tsallis q-distribution as its equilibrium PDF. To be precise, Stariolo defined a quite general family of SDEs that depend on symmetric, unspecified potentials V(x) and lead to a generalization of the Tsallis q-statistics; the Tsallis q-statistics proper is obtained in the case of quadratic V(x). For such quadratic potentials, the resulting non-Gaussian bounded process (as well as its unbounded counterpart for q > 1) has been investigated in a series of influential papers [7-12] showing that the departure of the noise from a Gaussian PDF induces remarkable effects in noise-induced transitions and in stochastic resonance [7-9,11,13]. This process is sometimes called the Tsallis-Borland process [1], although it should more precisely be called the Tsallis-Stariolo-Borland (TSB) process, as we will do in the following. The Stariolo family of SDEs can be put in an even more general framework: the motion of a material point of position y in the overdamped regime (thus neglecting the effect of the mass), under the action of a deterministic force F(y) and a stochastic white-noise force ξ(t):

η ẏ = F(y) + ξ(t),

where η is a positive constant, F(y) is defined on (−1, 1) and such that

lim_{y→±1} F(y) = ∓∞,

and the potential U(y) associated with F(y) (F = −U′) is such that lim_{y→±1} U(y) = +∞.
This suggests that an ideal physical 'recipe' to generate bounded noises is to consider the overdamped motion of a point in a potential well of infinite height, under the perturbation of a stochastic external "white noise" force. This is a particular limit case of the classical problem of statistical physics studied by Kramers in his hugely influential paper published in 1940 [14]. After recalling in Section 2 the basic physical interpretation of the TSB process, we formally prove in Section 3 that the above-mentioned recipe, appealing as it is, can be fallacious: we show that the non-Gaussian TSB process undergoes a stochastic bifurcation at q = 0, the process being in fact unbounded for q < 0. In Section 4, however, we prove that the associated paradoxical physical scenario is only apparent, since by taking into account the mass of the point the resulting motion remains bounded. In other words, the unboundedness of the TSB stochastic process for q < 0 is a mathematical artifact caused by the overdamped approximation. Finally, in Section 5, we propose a family of deterministic forces that generalize the TSB noise and induce bounded motions in the overdamped approximation (as well as in the non-approximated case).

Basic notions
Let us consider a material point P of mass m and position (x, y, z) on which 1D forces act along the x-axis, and which at time t = 0 is not moving in the (y, z) plane. Thus, its subsequent motion will only be along the x-axis, with (y(t), z(t)) = (y(0), z(0)) for all t ≥ 0. Suppose that the forces acting on P are as follows: i) A linear viscous force:

F_visc(t) = −γ ẋ(t),  γ > 0.

For the sake of notational simplicity we set henceforward: γ = 1. ii) A stochastic white-noise force: F_s(t) = √(2β) ξ(t), with β > 0.
iii) A conservative force:

F_c(x) = F_l(x) + F_r(x),

where F_l(x) and F_r(x) are two repulsive forces centered, respectively, at x = −1 and at x = 1, and such that their potentials (denoted as U_l(x) and U_r(x)) are infinite at the respective centres of repulsion. In other words, we require that:

lim_{x→−1⁺} U_l(x) = +∞,  lim_{x→1⁻} U_r(x) = +∞.

Newton's equation for the motion of P thus reads as follows:

m ẍ = −ẋ + F_l(x) + F_r(x) + √(2β) ξ(t).   (12)

If the mass of the point P is much smaller than the viscous constant rate (γ = 1 in our notation), i.e. m ≪ 1, then one can adopt the first-type Kramers approximation by neglecting the contribution of the acceleration. This yields:

ẋ = F_l(x) + F_r(x) + √(2β) ξ(t).

In the TSB case the repulsive forces are

F_l(x) = 1/(1 + x),  F_r(x) = −1/(1 − x),   (15)-(16)

with β = 1 − q (q < 1), so that the resulting overdamped equation reads

ẋ = −2x/(1 − x²) + √(2(1 − q)) ξ(t).   (17)

Unboundedness of the TSB Noise for q < 0
Equation (17) (TSBE in the following) is an SDE of the form

ẋ = ϕ(x) + σ ξ(t),   (18)

where the drift and the (constant) diffusion are given by:

ϕ(x) = −2x/(1 − x²),  σ = √(2(1 − q)).   (19)

Consider the process x(t) solution of (17), with deterministic initial condition x₀ ∈ I = (−1, 1). Based on the physical model that generates TSBE, the process x apparently satisfies x(t) ∈ I for all times t ≥ 0. In order to carry out a formal investigation of this point, it is convenient to introduce the first exit time of the process from I, denoted as T(x₀):

T(x₀) = inf{ t > 0 : x(t) ∉ I }.   (20)

The aim is to study the conditions, if any, under which the random time T(x₀) is almost surely infinite. Some regularity conditions on the coefficients of the SDE, such as the positivity of σ² and the local integrability of (1 + |ϕ|)/σ², ensure that the process leaves any compact subinterval [a, b] ⊆ I with probability one [15]. Note that, in the case of TSBE, both conditions are fulfilled. We can therefore consider the almost surely finite random time

T_{a,b}(x₀) = inf{ t > 0 : x(t) ∉ [a, b] },   (21)

and study the case a → −1, b → 1. Of course, the process x at the time T := T_{a,b}(x₀) will always satisfy either x(T) = a or x(T) = b. The probabilities of these two mutually exclusive events can be expressed as follows [15]:

P( x(T) = a ) = (s(b) − s(x₀)) / (s(b) − s(a)),  P( x(T) = b ) = (s(x₀) − s(a)) / (s(b) − s(a)),   (22)

where s(x) is the scale function associated with the process x(t). For a process satisfying an SDE such as (18), the scale function takes the form

s(x) = ∫_c^x exp( −2 ∫_c^y ϕ(z)/σ² dz ) dy,   (23)

where c is any point in I. Notice that, although s(x) depends on the choice of c, the RHSs of (22) are both independent of it. Indeed, two scale functions s₁ and s₂ obtained by choosing different constants c are related by s₂(x) = α s₁(x) + β, for some constants α > 0 and β.
In the case (19) of TSBE, the scale function with c = 0 thus reads as follows:

s(x) = ∫₀ˣ (1 − y²)^{−1/(1−q)} dy.   (24)

In particular, notice that the value of q affects whether |s(±1)| is finite or infinite: the integrand is of order (1 ∓ y)^{−1/(1−q)} near ±1, so |s(±1)| = ∞ if and only if 1/(1−q) ≥ 1, i.e. if and only if q ≥ 0. The following result about the behavior of TSBE near the boundaries in the case q ≥ 0 can now be proved.
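The finite/infinite dichotomy of |s(±1)| can be made concrete in the two cases q = 0 and q = −1, where the integral ∫₀ˣ (1 − y²)^{−1/(1−q)} dy (the scale-density normalization assumed throughout these sketches) has elementary closed forms:

```python
import math

def scale_fn(x, q):
    """Scale function s(x) = int_0^x (1-y^2)^(-1/(1-q)) dy of the TSB SDE,
    in the two illustrative cases with elementary closed forms."""
    if q == 0:       # exponent 1: s(x) = atanh(x), diverges as x -> 1
        return math.atanh(x)
    if q == -1:      # exponent 1/2: s(x) = asin(x), finite as x -> 1
        return math.asin(x)
    raise ValueError("closed form implemented only for q in {0, -1}")

for eps in (1e-2, 1e-4, 1e-8):
    b = 1.0 - eps
    print(f"s({b}):  q=0 -> {scale_fn(b, 0):8.3f}   q=-1 -> {scale_fn(b, -1):.6f}")
```

At q = 0 the scale function diverges logarithmically at the boundary (boundary unattainable), while at q = −1 it converges to π/2 (boundary attainable), in line with the bifurcation at q = 0.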
Theorem 3.1. Consider TSBE (17) with 0 ≤ q < 1. Then, for any initial condition x₀ ∈ I = (−1, 1), the solution x(t) remains in I for all times t ≥ 0, with probability one. In terms of the random time (20): P(T(x₀) = +∞) = 1.

Proof. Consider any compact subinterval [a, b] ⊆ I containing x₀. Up to time T_{a,b}(x₀) (eq. (21)), a strong solution of (17) exists and is unique, because of the Lipschitz property of its coefficients in [a, b]. Such a solution can also be uniquely extended up to the explosion time T(x₀), because the coefficients remain locally Lipschitz on the whole interval I. Now observe that

P( inf_{t<T(x₀)} x(t) ≤ a ) ≥ P( x(T_{a,b}(x₀)) = a ) = (s(b) − s(x₀)) / (s(b) − s(a)),   (26)

given the first identity in (22). Also, from (24), s(1) = +∞ and s(−1) = −∞ when q ≥ 0. Therefore, by letting b tend to 1 in (26), we get P( inf_{t<T(x₀)} x(t) ≤ a ) = 1. Similarly, one has P( sup_{t<T(x₀)} x(t) ≥ b ) = 1, since s(−1) = −∞. Since this holds for any a, b ∈ I, we have actually shown that

P( inf_{t<T(x₀)} x(t) = −1,  sup_{t<T(x₀)} x(t) = 1 ) = 1   (28)

under the hypothesis q ≥ 0.
We can now show that the event A := {T(x₀) < ∞} has null probability. Indeed, on A, we can either have inf_{t<T(x₀)} x(t) = −1 or sup_{t<T(x₀)} x(t) = 1, but not both: since T(x₀) is the first exit time, a continuous path cannot approach both boundaries within the finite time T(x₀) without leaving I strictly earlier. Therefore, given (28), we have

1 = P( inf_{t<T(x₀)} x(t) = −1, sup_{t<T(x₀)} x(t) = 1 ) ≤ P(Aᶜ).

The only way this can happen is that P(A) = 0, which is equivalent to P(T(x₀) = ∞) = 1. This completes the proof.
Theorem 3.1 confirms that, for small values of the noise σ (the ones corresponding to q ∈ [0, 1], see (19)), the solution of TSBE defines a bounded stochastic process, as intuition suggests. The same tools used in the proof of Theorem 3.1 do not, however, allow one to draw any conclusion about the case q < 0, where |s(±1)| are both finite. To study this case, it is useful to consider the mean exit time of the process from any [a, b] ⊆ I,

M_{a,b}(x₀) = E[ T_{a,b}(x₀) ].   (30)

If we denote by p_{x₀}(t, x) the density of the process at time t > 0 (with absorption at a and b), we can write the expectation on the RHS of (30) as

M_{a,b}(x₀) = ∫₀^∞ ∫_a^b p_{x₀}(t, x) dx dt.   (31)

Therefore, we can exploit the Fokker-Planck equation for p_{x₀}(t, x) to write down an ordinary differential equation for M_{a,b}(x), viewed as a function of the initial position only. The ordinary differential equation satisfied by M_{a,b}(x) is as follows, cf. [16] for full details:

(σ²/2) M″_{a,b}(x) + ϕ(x) M′_{a,b}(x) = −1.   (32)

This is subject to the boundary conditions

M_{a,b}(a) = M_{a,b}(b) = 0,   (33)

which immediately follow from the definition of M_{a,b}. The analytic solution to the boundary value problem (32)-(33) is available, and reads as follows [15]:

M_{a,b}(x) = ∫_a^b G_{a,b}(x, y) m(dy),   (34)

where G_{a,b} is the following Green's function

G_{a,b}(x, y) = (s(x∧y) − s(a)) (s(b) − s(x∨y)) / (s(b) − s(a)),   (35)

and m is the so-called speed measure associated with the process (the solution of the SDE with drift ϕ and diffusion σ):

m(dy) = 2 dy / (σ² s′(y)).   (36)

Note again that the integrand of (34) does not depend on the particular choice of c made to define the scale function s in (23). In the Tsallis-Stariolo-Borland case, we can therefore choose c = 0 and recover expression (24) for s(x). We can now make a precise statement about the behavior of TSBE under the condition q < 0.
Theorem 3.2. Consider TSBE (17) with q < 0. Then, for any initial condition x₀ ∈ I = (−1, 1), the solution x(t) attains one of the boundaries of I in finite time, with probability one. In other words, P(T(x₀) < ∞) = 1.

Proof. Let us first recall from (24) the expression of the scale function for TSBE:

s(x) = ∫₀ˣ (1 − y²)^{−1/(1−q)} dy.   (38)

The assumption q < 0 guarantees that both |s(−1)| and |s(+1)| are finite. Therefore, we can extend the Green's function G_{a,b} in (35) to a maximal function G_{−1,1} defined on the whole square [−1, 1]²:

G_{−1,1}(x, y) = (s(x∧y) − s(−1)) (s(1) − s(x∨y)) / (s(1) − s(−1)).   (39)

The average exit time from I of the process x(t) starting at x₀ can then be obtained from (34), by letting a tend to −1 and b tend to 1. We have:

E[T(x₀)] = ∫_{−1}^{1} G_{−1,1}(x₀, y) (2/σ²) (1 − y²)^{1/(1−q)} dy.   (40)

Given the continuity of the last integrand for all y ∈ [−1, 1], we deduce that E[T(x₀)] < ∞. In particular, this ensures that T(x₀) is almost surely finite, as was to be proved.
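As a concrete cross-check, the mean exit time started at x₀ = 0 admits an elementary evaluation at q = −1, where the scale function is s(y) = arcsin(y). Assuming the normalization σ² = 2(1 − q) used in these sketches, the exit-time integral reduces to E[T(0)] = (1/2) ∫₀¹ (π/2 − arcsin y) √(1 − y²) dy, and the substitution y = sin θ removes the boundary singularity:

```python
import math

def mean_exit_time_q_minus1(n=100_000):
    """Midpoint-rule evaluation of
       E[T(0)] = (1/2) * int_0^1 (pi/2 - asin(y)) * sqrt(1 - y^2) dy
    after the substitution y = sin(theta)."""
    h = (math.pi / 2) / n
    acc = 0.0
    for i in range(n):
        theta = (i + 0.5) * h
        acc += (math.pi / 2 - theta) * math.cos(theta) ** 2
    return 0.5 * acc * h

approx = mean_exit_time_q_minus1()
exact = math.pi ** 2 / 32 + 1.0 / 8.0   # closed form of the same integral
print(approx, exact)                    # the two agree to many digits
```

The finite value π²/32 + 1/8 ≈ 0.433 illustrates Theorem 3.2: for q = −1 the boundaries are reached, on average, in finite time (under the assumed normalization).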
Theorem 3.2 therefore proves that the Tsallis-Stariolo-Borland process reaches one of the endpoints ±1 in finite time with probability one, if q < 0. Once this happens, the loss of regularity of the coefficients does not guarantee that the solution of the SDE can be extended in a unique way. A more in-depth analysis would indeed show that uniqueness is lost in this case. There is, in fact, a positive probability that the process develops outside the bounded interval I after attaining one of the boundaries, with a consequent dispersion of the initial mass over the whole real line. We will not provide a formal proof of these facts here. Indeed, we think that the most important and unexpected result has already been shown in Theorem 3.2, and it consists in the reachability of the boundaries under the condition q < 0.
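The dichotomy between Theorems 3.1 and 3.2 can also be observed by direct simulation. The sketch below integrates the overdamped equation with the Euler–Maruyama scheme, assuming drift −2x/(1 − x²) and noise intensity √(2(1 − q)); this normalization, the time step, and the horizon are illustrative assumptions:

```python
import math
import random

def first_exit_time(q, dt=1e-4, t_max=50.0, seed=42):
    """Euler-Maruyama path of the overdamped TSB-type SDE
         dx = -2x/(1-x^2) dt + sqrt(2(1-q)) dB(t),  x(0) = 0.
    Returns the first time |x| reaches 1, or None if the discretized
    path stays inside (-1, 1) up to t_max."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * (1.0 - q))
    x = 0.0
    n_steps = int(t_max / dt)
    for i in range(1, n_steps + 1):
        x += -2.0 * x / (1.0 - x * x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if abs(x) >= 1.0:
            return i * dt
    return None

print("q = -5 :", first_exit_time(-5.0))
print("q = 0.5:", first_exit_time(0.5))
```

For q = −5 the discretized path leaves (−1, 1) quickly, while for q = 0.5 it typically remains inside for the whole run; of course, a finite-step scheme illustrates, but cannot prove, boundedness.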
In Subsection 3.1 we estimate, as a function of q < 0, the average time that the process needs to attain one of the boundaries ±1. In particular, this will provide the order of the speed at which E[T(x₀)] tends to infinity as q → 0⁻.

Average exit time
Formula (40) was used in the proof of Theorem 3.2 to show that the expected exit time of the process from (−1, 1) is finite if q < 0, by a simple continuity argument on the integrand. Recall that, in that formula, we put

β = 1 − q (> 1 for q < 0),  σ² = 2β.   (41)

In the following, the functional dependence of E[T(x₀)] on the parameter q will be made explicit. Without loss of generality, we consider the case where the process starts from x₀ = 0. We therefore have:

E[T(0)] = (1/β) ∫_{−1}^{1} G_{−1,1}(0, y) (1 − y²)^{1/β} dy.   (43)

From (38) it immediately follows that s(x) is an odd function. Thus, by (39), G_{−1,1}(0, y) = (s(1) − s(|y|))/2, and (43) becomes

E[T(0)] = (1/β) ∫₀¹ (s(1) − s(y)) (1 − y²)^{1/β} dy,   (44)

given the expression of s in (38). Now observe that the condition β > 1 implies

2^{−1/β} (1 − u)^{−1/β} ≤ (1 − u²)^{−1/β} ≤ (1 − u)^{−1/β},  u ∈ [0, 1),   (45)

and hence, integrating over [y, 1],

(β/(β − 1)) 2^{−1/β} (1 − y)^{(β−1)/β} ≤ s(1) − s(y) ≤ (β/(β − 1)) (1 − y)^{(β−1)/β}.   (46)

In particular, from (46), it follows that

s(1) − s(y) = C₂(β) (β/(β − 1)) (1 − y)^{(β−1)/β},   (47)

where

2^{−1/β} ≤ C₂(β) ≤ 1   (48)

(C₂ depends on y as well, but its bounds are uniform in y). Given (47), and since (1 − y)^{(β−1)/β} (1 − y²)^{1/β} = (1 − y)(1 + y)^{1/β}, the average exit time E[T(0)] in (44) takes the following form:

E[T(0)] = (C₂(β)/(β − 1)) ∫₀¹ (1 − y)(1 + y)^{1/β} dy,   (49)

where the integral lies between 1/2 and 2^{1/β}/2. In terms of the original parameter q = 1 − β < 0, we have

E[T(0)] = C/|q|,  with 1/4 ≤ C ≤ 1,   (50)

the bounds on C following immediately from (48) and from the bounds on the integral in (49). In particular, expression (50) allows us to deduce the asymptotic behavior of the average exit time in the two cases q → 0⁻ and q → −∞.
• If the negative parameter q approaches zero, the average time needed to attain one of the boundaries tends to infinity, linearly in 1/|q|.

• The average time needed to attain one of the boundaries can be made arbitrarily small, provided the parameter q is chosen negative and large enough in absolute value.

An (apparent) physical paradox, and a really bounded noise
Apparently, from a physical point of view, this means that the material point could eventually reach and overcome the boundaries of the infinite-height well, as a pure consequence of sufficiently large stochastic fluctuations. However, this apparent paradox has an easy explanation: it simply comes from the overdamped approximation, which in this particular case leads to an unphysical result. As stressed by Hänggi and Jung [17,18], the large-friction approximation is equivalent to the condition of validity of the Smoluchowski approximation, which reads as follows [17-19]:

D |dF/dx| ≪ F²(x),   (53)

where D is the diffusion coefficient and F(x) is the conservative force the point is subject to. In our case, this yields:

(1 − q)(1 + x²) ≪ 2x².   (54)

It is interesting to note that the constraint (54) is violated not only for x close to −1 and 1, as is intuitive, but also close to 0. We are however going to show that the infinite potential barriers cannot be overcome in the original full Newton's equation representing the motion of the point P (see (12)) under the forces (15) and (16), namely

m ẍ = −ẋ − 2x/(1 − x²) + √(2(1 − q)) ξ(t).   (55)

In this regard, let the initial position be x(0) ∈ (−1, 1), and let the initial velocity v(0) ∈ (−∞, ∞) be arbitrary. Let us prove that the barriers ±1 are never reached, independently of the value of q (q < 1, of course).
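The region where the validity constraint fails can be tabulated directly. The snippet below evaluates the ratio D|F′(x)|/F²(x) for F(x) = −2x/(1 − x²) and D = 1 − q (the normalization assumed in these sketches); the overdamped approximation requires this ratio to be much smaller than one:

```python
def overdamped_validity_ratio(x, q):
    """Ratio D|F'(x)| / F(x)^2 for F(x) = -2x/(1-x^2), D = 1-q.
    Algebraically this equals (1-q)(1+x^2)/(2x^2)."""
    D = 1.0 - q
    F = -2.0 * x / (1.0 - x * x)
    F_prime = -2.0 * (1.0 + x * x) / (1.0 - x * x) ** 2
    return D * abs(F_prime) / F ** 2

q = -2.0
for x in (0.01, 0.5, 0.99):
    print(f"x = {x}: ratio = {overdamped_validity_ratio(x, q):.4g}")
# The ratio blows up as x -> 0 (where F vanishes) and tends to (1-q)
# near the boundaries, so for q < 0 it is also large close to x = +/-1.
```

This makes the two failure regions of condition (54) visible at a glance.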
Theorem 4.1. For all q < 1, the solution of (55) with initial conditions x(0) = x₀ ∈ I = (−1, 1) and v(0) = v₀ ∈ (−∞, +∞) exists globally in time, is unique, and is contained in I for all times.
Proof. Before we start, we explain the idea. In the deterministic case, recalled for convenience in Step 1, the global energy is decreasing, because of the viscosity. Since we start from an initial condition with finite energy, the infinite potential barriers cannot be reached, because the energy must remain finite. The extension of this simple argument to the stochastic case requires a proof. Indeed, the additive noise introduces energy on average, as the energy balance (68) shows. One thus has to prove that this injected energy is not sufficient to overcome the infinite potential barriers. This is done in Step 2. One detail is, however, delicate, namely taking the expected value of the Itô integral when we only know that it is a local martingale (that is, we do not know a priori that the integrand is square integrable in all variables).
Step 2 is completed somewhat formally by using the fact that this Itô integral has zero expectation. Then, in Step 3, we show how to make the argument rigorous.
Step 1. Let us rewrite equation (55) in position-velocity coordinates:

ẋ = v,   (56)
m v̇ = −v − 2x/(1 − x²) + √(2β) ξ(t),   (57)

where we denote by β the positive constant 1 − q.
Let us first explain the idea in the deterministic case β = 0 (the result in this case is well known). The potential energy, kinetic energy, and total energy read as follows:

U(x) = −log(1 − x²),  K(v) = (m/2) v²,  E(t) = K(v(t)) + U(x(t)),

respectively. Notice that

dE/dt = −v²(t) ≤ 0.   (61)

Let x(t) be a solution, with x(0) ∈ I, I = (−1, 1), defined on some interval [0, T₀) (a local-in-time unique solution exists since the coefficients of the equation are locally Lipschitz continuous on I). By classical arguments of analysis one can consider the maximal interval of time [0, T_max) where the solution exists, is unique, and belongs to I. We then have two possibilities for T_max: either T_max = +∞, or T_max < ∞ and lim_{t→T_max} x(t) is either 1 or −1. In order to prove that T_max = +∞, let us show that, for all t ∈ [0, T_max),

E(t) ≤ E(0) < ∞.   (62)

This implies T_max = +∞, because under inequality (62) lim_{t→T_max} x(t) cannot be equal to 1 or −1: the potential energy, and hence E, would diverge.
Indeed, integrating (61), we have E(t) ≤ E(0) < ∞ and, in particular,

−log(1 − x²(t)) = U(x(t)) ≤ E(t) ≤ E(0).   (63)

The property |x(t) − 1| ≥ exp(−E(0) − log 2) follows: from (63), (1 − x(t))(1 + x(t)) ≥ exp(−E(0)), and since 1 + x(t) ≤ 2 we get 1 − x(t) ≥ exp(−E(0) − log 2). The inequality |x(t) + 1| ≥ exp(−E(0) − log 2) is similar. We have proved the claim of the theorem in the deterministic case.
Step 2. Let us now give the proof in the stochastic case β > 0. As long as the solution has the property x(t) ∈ (−1, 1), it lives in a region of the (x, v) space where the coefficients of the equation are locally Lipschitz continuous. Hence a unique maximal solution exists, maximal with respect to the property x(t) ∈ (−1, 1), on a random time interval [0, T_max). We have to prove that P(T_max = +∞) = 1. When T_max < ∞, one has lim_{t→T_max} x(t) = ±1. Let (x, v) be the maximal solution on [0, T_max). By the Itô formula, on [0, T_max),

dE/dt = −v²(t) + β/m + √(2β) v(t) ξ(t).   (67)

Representing ξ as the derivative of a Brownian motion, ξ = Ḃ, we get

E(t) = E(0) − ∫₀ᵗ v²(s) ds + (β/m) t + √(2β) ∫₀ᵗ v(s) dB(s).   (68)

In particular, always on [0, T_max), the two following inequalities hold:

E(t) ≤ E(0) + (β/m) t + √(2β) ∫₀ᵗ v(s) dB(s),   (69)

(m/2) v²(t) ≤ E(0) + (β/m) t + √(2β) ∫₀ᵗ v(s) dB(s).   (70)

From (70) we deduce (notice that t ∧ T_max ≤ t)

E[ v²(t ∧ T_max) ] ≤ (2/m) ( E(0) + (β/m) t ),   (71)

since the expectation of the Itô integral is zero (the rigorous proof of this inequality requires an argument of stopping times and is thus postponed to Step 3 below; the proper definition of v²(t ∧ T_max) is also given there).
Then, from (69) and Doob's inequality for the Itô integral, we deduce that, for each given deterministic time T > 0,

E[ sup_{t<T∧T_max} E(t) ] < ∞   (72)

(the proof of this claim is also given in detail in Step 3 below). This implies

sup_{t<T∧T_max} E(t) < ∞   (73)

with probability one. Then necessarily T_max = +∞, because in the opposite case, from lim_{t→T_max} x(t) = ±1, the supremum would be infinite. We have completed the proof that T_max = +∞ with probability one, which includes in particular the claim that x(t) ∈ (−1, 1) for all t ≥ 0, with probability one.
Step 3. Let us prove (71). Let τ_n be an increasing sequence of finite stopping times which converges almost surely to T_max from below. Let σ_n be the stopping time defined as

σ_n = inf{ t ∈ [0, τ_n] : |v(t)| ≥ n }   (75)

(σ_n = τ_n if the set is empty). From inequality (70) we have

(m/2) v²(t ∧ σ_n) ≤ E(0) + (β/m) t + √(2β) ∫₀ᵗ v(s) 1_{s≤σ_n} dB(s).   (76)

Since |v(s) 1_{s≤σ_n}| ≤ n, the Itô integral above is a martingale and thus its average is zero. Therefore, since t ∧ σ_n ≤ t,

E[ v²(t ∧ σ_n) ] ≤ (2/m) ( E(0) + (β/m) t ).   (77)

One can check that lim_{n→∞} σ_n = T_max. If T_max = ∞, then lim inf_{n→∞} v²(t ∧ σ_n) = v²(t). If T_max < ∞, namely when lim_{n→∞} t ∧ σ_n = t ∧ T_max, we do not know a priori that v can be prolonged with continuity at time T_max (the solution (x, v) is defined only on the maximal interval [0, T_max)). But lim inf_{n→∞} v²(t ∧ σ_n) < ∞ with probability one, by (77) and Fatou's lemma. Thus we define v²(t ∧ T_max) as this lim inf; in this sense, (77) combined with Fatou's lemma is the correct meaning of (71). Let us now prove (72). Let σ′_n be the stopping time defined as

σ′_n = inf{ t ∈ [0, σ_n] : E(t) ≥ n }   (78)

(σ′_n = σ_n if the set is empty). From (69) we have (since t ∧ σ′_n ≤ t)

sup_{t≤T∧σ′_n} E(t) ≤ E(0) + (β/m) T + √(2β) sup_{t≤T} | ∫₀ᵗ v(s) 1_{s≤σ′_n} dB(s) |.   (79)

Again |v(s) 1_{s≤σ′_n}| ≤ n, so ∫₀ᵗ v(s) 1_{s≤σ′_n} dB(s) is a martingale. Hence, by Doob's inequality and the Itô isometry formula,

E[ sup_{t≤T∧σ′_n} E(t) ] ≤ E(0) + (β/m) T + 2 √(2β) ( E ∫₀^{T∧σ′_n} v²(s) ds )^{1/2}.   (80)

One can check that lim_{n→∞} σ′_n = T_max. By monotone convergence, the right-hand side of (80) converges to

E(0) + (β/m) T + 2 √(2β) ( E ∫₀^{T∧T_max} v²(s) ds )^{1/2},   (81)

which is finite by (71). We have therefore seen that the right-hand side of (80) is bounded, uniformly in n. It remains to understand the limit of the left-hand side of (80). One has

sup_{t≤T∧σ′_n} E(t) ↗ sup_{t<T∧T_max} E(t)  as n → ∞,   (83)

hence again we may apply monotone convergence and get that the left-hand side of (80) converges to E[ sup_{t<T∧T_max} E(t) ]. This proves (72) and completes the proof of the theorem.
The above Theorem 4.1, despite its quite technical proof, unequivocally shows the following: under the action of the above-described potential U(x) as well as of the viscous force, and by fully taking into account the mass m of the point P, the motion remains bounded, independently of the particular value of the parameter q < 1. For the sake of completeness, it is worth mentioning that the stationary probability density of the vector process (x, v) is of the classical Boltzmann-like form:

ρ_st(x, v) ∝ exp( −(1/β) ( (m/2) v² + U(x) ) ),

yielding:

ρ_st(x, v) ∝ (1 − x²)^{1/(1−q)} exp( −m v²/(2(1 − q)) ).
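For contrast with the overdamped case, the full Newtonian system can be integrated in the same Euler–Maruyama fashion. The sketch below uses the logarithmic potential U(x) = −log(1 − x²), unit viscous constant, and illustrative values of m and q (all normalizations and parameter values are assumptions). Note that, unlike the exact solution, a finite-step scheme could in principle overshoot the barriers, so the run only reports the largest |x| reached:

```python
import math
import random

def max_excursion(q=-1.0, m=1.0, dt=1e-4, t_max=5.0, seed=7):
    """Euler-Maruyama scheme for the underdamped system (gamma = 1)
         dx = v dt,
         m dv = (-v - 2x/(1-x^2)) dt + sqrt(2(1-q)) dB(t),
    with -2x/(1-x^2) = -U'(x), U(x) = -log(1-x^2).
    Returns the largest |x| reached along the discretized path."""
    beta = 1.0 - q
    rng = random.Random(seed)
    x, v = 0.0, 0.0
    kick = math.sqrt(2.0 * beta * dt) / m
    max_abs_x = 0.0
    for _ in range(int(t_max / dt)):
        d = 1.0 - x * x
        if d <= 0.0:               # numerical overshoot of the barrier: stop
            break
        v += (-v - 2.0 * x / d) * dt / m + kick * rng.gauss(0.0, 1.0)
        x += v * dt
        max_abs_x = max(max_abs_x, abs(x))
    return max_abs_x

print("largest |x(t)| reached:", max_excursion())
```

With these (assumed) parameters the restoring force near ±1 dominates and the discretized position typically stays well inside (−1, 1), in line with Theorem 4.1.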

A parametric extension of the TSB model leading to a first order SDE with bounded solutions
In this section we propose a one-parameter family of noises that includes the TSB noise as a particular case. Indeed, let us consider the following family of forces ϕ_α(x), which depend on a parameter α > 0:

ϕ_α(x) = −2x/(1 − x²)^α,   (92)

where the case α = 1 recovers the TSB force.

Figure 1: Graph of the potential U_α in (95) for α = 0.7, α = 1, and α = 1.5. The potential is bounded for α < 1 and unbounded for α ≥ 1. However, the case α = 1 only yields a logarithmic growth near the boundaries, while the case α > 1 yields a polynomial growth.
According to the notation of Section 2, the corresponding full Newton's equation and the overdamped approximated equation (m ≪ 1) read as follows, respectively:

m ẍ = −ẋ + ϕ_α(x) + √(2β) ξ(t),   (93)
ẋ = ϕ_α(x) + √(2β) ξ(t).   (94)

The potential associated with (93) therefore takes the following form:

U_α(x) = ((1 − x²)^{1−α} − 1)/(α − 1)  for α ≠ 1,  U_1(x) = −log(1 − x²),   (95)

where the constant has been chosen so that U_α(0) = 0 for all α. Notice that, for α ∈ (0, 1), the above potential is bounded on the whole interval (−1, 1), so it does not form an infinite potential well (Figure 1, blue line). The case α = 1 has been studied in detail in Sections 3 and 4: the potential does form an infinite well but, in the case of the approximated equation (94), the behavior of the solution further depends on the parameter q. As far as the full Newton's equation is concerned, instead, the solution remains bounded independently of the value of q. Finally, the case α > 1 remains to be investigated. Preliminarily, we note that in this case too the potential U_α(x) forms an infinite potential well, as illustrated in Figure 1. Second, we note that the proof of Theorem 4.1 can be used to show that the family of Newton's equations in (93), i.e.

m ẍ = −ẋ − U_α′(x) + √(2(1 − q)) ξ(t),
gives rise to solutions which never leave the interval I = (−1, 1) for all positive times. Thus, in the case α > 1, it remains to check whether the same holds true for the first-order SDE (94) as well. By looking back at the proof of Theorem 3.1, we see that a sufficient condition for the process not to reach the boundaries of its state space is that the scale function s_α(x) associated with (94) explodes at the boundaries. So, let us simply check that |s_α(±1)| = ∞ under the hypothesis α > 1, where

s_α(x) = ∫₀ˣ exp( U_α(y)/β ) dy,  β = 1 − q.

We have:

s_α(1) = ∫₀¹ exp( ((1 − y²)^{1−α} − 1)/(β(α − 1)) ) dy = +∞,

where the last equality holds precisely because α > 1: the exponent diverges to +∞ as y → 1, so the integrand is not integrable near the boundary. Likewise, s_α(−1) = −∞. So we can safely conclude that the boundaries ±1 are not reached if α > 1, and that the process therefore remains bounded in this case, without any further assumption on the magnitude of the constant diffusion σ. In particular, when α > 1, the process x_α(t) solution of (94) is ergodic for any value of the parameter q < 1. Its stationary density can easily be derived as a time-invariant solution of the Fokker-Planck equation, which (up to a normalization constant) yields

ρ_α(x) ∝ exp( −((1 − x²)^{1−α} − 1)/((1 − q)(α − 1)) ).

The mass of this density moves away from the boundaries of I = (−1, 1) as the value of α increases, as Figure 2 shows. Summarizing the results of this section, from a heuristic point of view we may say that the case α = 1 lies at the interface between potentials with and without infinite wells, which yield bounded and unbounded solutions, respectively. The potential U_{α=1}(x) is itself infinite at x = ±1, but its growth is very slow, being logarithmic. As a consequence, the boundedness of the solutions obtained under the overdamped approximation depends on the magnitude of the stochastic perturbation to which the point P of small mass is subject.
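The concentration effect can be checked without a figure: the sketch below evaluates the unnormalized stationary density ρ_α(x) ∝ exp(−U_α(x)/(1 − q)), with U_α(x) = ((1 − x²)^{1−α} − 1)/(α − 1), the normalization assumed in this section:

```python
import math

def rho_alpha(x, alpha, q=-1.0):
    """Unnormalised stationary density exp(-U_alpha(x)/(1-q)), where
    U_alpha(x) = ((1-x^2)^(1-alpha) - 1)/(alpha - 1), for alpha > 1."""
    beta = 1.0 - q
    U = ((1.0 - x * x) ** (1.0 - alpha) - 1.0) / (alpha - 1.0)
    return math.exp(-U / beta)

for alpha in (1.5, 3.0, 6.0):
    ratio = rho_alpha(0.9, alpha) / rho_alpha(0.0, alpha)
    print(f"alpha = {alpha}: rho(0.9)/rho(0) = {ratio:.4g}")
# The ratio decreases as alpha grows: the stationary mass retreats from
# the boundaries as the potential wall becomes steeper.
```

This reproduces, numerically, the trend described for Figure 2.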

Concluding Remarks
In this work we have shown that the Tsallis-Stariolo-Borland SDE, despite having an apparently stringent physical interpretation as the overdamped stochastic motion of a point P in an infinite potential well, is able to generate unbounded noises for a sufficiently large diffusion coefficient (namely, for negative Tsallis parameter q). The explanation of this apparently unphysical and counter-intuitive behavior is that the first-order SDE is the result of the overdamped approximation, which fails in our case. Indeed, we have shown that the full Newtonian equation describing the motion of the material point P generates a genuinely bounded stochastic process for the position x(t) of the particle. We have also shown that the TSB case lies at the interface between finite and infinite potential wells in a family of SDEs with potentials U_α(x) depending on a real positive parameter α. The properties of this family suggest that the TSB potential might be a mathematical artifact separating two more physical scenarios, where for α ∈ (0, 1) the motion is unbounded due to the boundedness of the potential for finite x, and for α > 1 the motion is bounded both in the presence and in the absence of the overdamped approximation.