On the Approximation of Moments for Nonlinear Systems

Abstract—Model reduction by moment-matching relies upon the availability of the so-called moment. If the system is nonlinear, the computation of moments relies on the solution of a specific underlying invariance equation, which can be difficult or impossible to solve. This note presents four technical contributions related to the theory of moment matching: first, we identify a connection between moment-based theory and weighted residual methods. Second, we exploit this relation to provide an approximation technique for the computation of nonlinear moments. Third, we extend the definition of nonlinear moment to the case in which the generator is described in explicit form. Finally, we provide an approximation technique to compute the moments in this scenario. The results are illustrated by means of two examples.


I. INTRODUCTION
The theory behind model order reduction by moment-matching relies upon the notion of moment, which was originally conceived within an interpolation framework for linear systems described by differential equations, see e.g. [1]. Subsequently, the definition of moment has been extended to a wider class of systems, see [2], [3], including nonlinear systems. A comprehensive review of the state of the art on model reduction by moment-matching, including connections between the notion of moment introduced in [2], [3] and previously established definitions (such as those related to Krylov methods), can be found in [2], [4]. Note that, apart from the notion of moment arising in systems theory, a different notion can be found in the field of probability (see [5], [6]). While the latter is not considered within the scope of this technical note, we note that recent efforts have been presented in [7] to bridge the gap between the notions of moment in probability theory and in systems theory.
The moment as defined in [2], [3] is strongly related to the steady-state output response of the interconnection between the system under analysis and a signal generator. When the system is nonlinear, the moment is essentially defined in terms of the solution of an invariance equation, which can be difficult or impossible to solve. In other words, the computation of a reduced model by moment-matching depends upon the availability of a suitable technique to approximate the corresponding moment.
In this study the moment-based theory as introduced in [2], [3] is connected to the classical formulation of weighted residual methods (WRMs), i.e. spectral (Galerkin) and pseudospectral (collocation) methods, see e.g. [8], [9]. The family of WRMs aims to compute approximate solutions of differential equations by expanding the system variables in a set of basis functions and then minimising a particular (approximation) error function termed the residual. These methods have been successfully applied to a variety of problems in different applications, including, for instance, the numerical approximation of solutions of the Navier-Stokes equations [8], [10].

Nicolás Faedo and John V. Ringwood are with the Centre for Ocean Energy Research, Maynooth University, Co. Kildare, Ireland (e-mail: nicolas.faedo@mu.ie).
Giordano Scarciotti is with the Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2AZ, U.K. (e-mail: g.scarciotti@ic.ac.uk).
Alessandro Astolfi is with the Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2AZ, U.K., and also with the Department of Civil Engineering and Computer Science Engineering, University of Rome "Tor Vergata", 00133 Rome, Italy (e-mail: a.astolfi@ic.ac.uk).
This note provides four technical contributions related to the framework of moment matching. First, we formalise a connection between moment-based theory and the family of WRMs. Second, inspired by this result, we propose a method to approximate the moment of a nonlinear system driven by signal generators in implicit form (loosely speaking, generators described by differential equations). We note that approximation methods for the moment of a nonlinear system driven by this class of inputs have been studied in [11]. However, since [11] relies on computations on the steady-state response, [11] assumes local exponential stability of an equilibrium point of the underlying system, which is not required for the methods proposed in this note. The third contribution of the note is to extend the moment-based framework to nonlinear systems driven by signal generators in explicit form (loosely speaking, generators not necessarily described by differential equations). In particular, we focus on periodic and potentially discontinuous inputs (motivated by the existence of a large number of applications in which this class of signals occurs). Note that, up until this point, this explicit framework was only defined for linear systems, see [12]. Finally, combining all the previous results, we propose a method to approximate the moment of a nonlinear system driven by generators in explicit form.
We briefly mention that, while our contribution is technical and strictly related to the definition and computation of moments, these technical contributions allow the definition and computation of new classes of reduced-order models. These models, which are essential for a variety of engineering applications (see e.g. [1]), directly motivate this work. Moreover, we note that the contributions of this note can also potentially be used to compute approximate solutions to optimal control problems, following the moment-based control framework presented in [13], [14].
The remainder of this note is organised as follows. Section II briefly recalls both the theory behind moments for systems driven by implicit signal generators and the theory of WRMs. Then a connection between those methodologies is formalised. Section III proposes a method to approximate the moment of a nonlinear system driven by an implicit signal generator. Section IV formalises the definition of the moment of a nonlinear system driven by an explicit signal generator and proposes a method to approximate such a moment, with particular focus on periodic discontinuous signals. Finally, conclusions are presented in Section V.

A. Notation and preliminaries
Standard notation is used throughout this study, with some exceptions detailed in this preliminary section. R+ (R−) denotes the set of non-negative (non-positive) real numbers. C0 denotes the set of purely imaginary complex numbers and C<0 denotes the set of complex numbers with negative real part. The symbol Nq indicates the set of all positive natural numbers up to q, i.e. Nq = {1, 2, ..., q}. The symbol 0 stands for any zero element, according to the context. The symbol In denotes the identity matrix of size n. If x is a real-valued row/column vector, then xi ∈ R denotes the i-th element of x. We write a matrix X ∈ Rn×m element-wise as X = [xij]n,m. The spectrum of a matrix A ∈ Rn×n, i.e. the set of its eigenvalues, is denoted by σ(A). The symbol ⊕ni=1 denotes the direct sum of n matrices, i.e. ⊕ni=1 Mi = diag(M1, M2, ..., Mn). If F = F⊤ is a symmetric matrix, the expression F ≻ 0 means that F is positive definite. The symbol L{f(t)} denotes the Laplace transform of the function f (provided that f is Laplace transformable) and, abusing the notation, σ(L{f(t)}) denotes the set of poles of L{f(t)}. The set of square-integrable functions on the interval Ξ ⊂ R is denoted by L2(Ξ). The Kronecker product of two matrices M1 and M2 is denoted by M1 ⊗ M2. The symbol εn ∈ Rn×1 denotes a vector with entries in odd positions equal to 1 and entries in even positions equal to 0. Finally, we recall a definition from [2].
Definition 1: Let x, with x(t) ∈ Rn, be the state of the dynamical system Σ, and u, with u(t) ∈ R, be the input of Σ. Let t0 and x0 = x(t0) be the initial time and the initial state, respectively. If there exists a function Φ such that x(t) = Φ(t, t0, x0, u), (1), we call equation (1) the representation in explicit form, or the explicit model, of Σ. Assume that Φ(t, t0, x0, u) has a continuous derivative with respect to t for every t0, x0 and u, and that there exists a function f such that ẋ(t) = f(x(t), u(t)), (2). We call the differential equation (2) the representation in implicit form, or the implicit model, of Σ.

II. MOMENT-BASED THEORY AND WEIGHTED RESIDUAL METHODS
This section briefly recalls the notion of moment for systems driven by generators in implicit form. We then specialise this notion to linear systems to formalise a connection between moment-based theory and the family of WRMs.
Finally, we recall a result which, under additional assumptions, connects the definition of moment with the steady-state response of the output of the interconnected system (5).
Assumption 3: The signal generator (4) is such that all the eigenvalues of S are simple and have zero real part.
Remark 1: While moments can be naturally defined for nonlinear signal generators in implicit form (see [2], [3]), this is beyond the scope of this technical note: the class of input signals which motivates the technical contributions presented herein is captured by the linear generator (4) (see also Remark 3) and the explicit-form generator (33), discussed in Section IV.

B. The special case of linear systems
Suppose fnl(x, u) = 0 and hnl(x) = 0 in (3). The assumptions required in the nonlinear case to formalise the definition of moment are less restrictive when the mapping f is purely linear, as detailed in [2], [3] and briefly recalled in the following.

C. Weighted residual methods and moments
Let Ξ be a closed interval in R. The basic idea behind the family of WRMs relies on the selection of a complete set {ψk} of orthogonal functions ψk : Ξ → R : t ↦ ψk(t), defined on a function space H with domain Ξ, and with the inner product on the space H defined as ⟨p(t), l(t)⟩ = ∫Ξ w(t) p(t) l(t) dt, (8), where p ∈ H, l ∈ H and w : Ξ → R is a weighting function. The standard assumption in WRMs (see [8, Chapter 1]) is that the state vector and the control input in (3) admit the expansions xi(t) = Σj x̂ij ψj(t) and u(t) = Σj ûj ψj(t), with j ∈ NM, where x̂ij ∈ R and ûj ∈ R denote, respectively, the coefficients of the expansion of xi and u. Note that M may be infinity.
Defining the set {ψk(t)}, k ∈ NN, with N < M, it is possible to write the corresponding N-dimensional approximations of x and u, denoted xN and uN respectively, as xN(t) = XΨ(t) and uN(t) = UΨ(t), (10), where Ψ(t) = [ψ1(t), ..., ψN(t)]⊤, Xk = [x̂1k, ..., x̂nk]⊤ and U = [û1, ..., ûN]. Defining the matrix X = [X1, ..., XN] ∈ Rn×N and substituting (10) into the dynamic equation (3), the residual function R(t) can be defined, in which the approximated time derivative of the state, ẋN, is given by ẋN(t) = XΨ̇(t). Then, given values of U, the approximated state trajectory is computed in terms of the nN unknown coefficients of X as the solution of the system of nN algebraic equations ⟨Ri(t), ζj(t)⟩ = 0, (14), for i ∈ Nn and j ∈ NN, where R = [R1, ..., Rn]⊤ and the test functions ζj, assumed to be sufficiently regular, form an orthogonal set {ζj}, j ∈ NN. If the test functions ζj are elements of the same set as the basis functions approximating the state, that is ζj = ψj, then the method is known as the spectral or Galerkin method. If the test functions are translated Dirac delta functions δtj = δ(t − tj), then the method is known as the pseudospectral or collocation method, and the points tj are called collocation points. From now on, we focus our study on the Galerkin method, since the collocation approach can be made equivalent to the Galerkin method by an appropriate selection of the set {tj} ⊂ Ξ, see e.g. [8, Chapter 4].
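For intuition, the Galerkin recipe just described can be sketched in a few lines of Python on a toy problem of our own (not code from the note): the scalar linear ODE x' = -x + cos t with the two-term Fourier basis {cos t, sin t}.

```python
import numpy as np

# Minimal Galerkin sketch (our own toy example): approximate the periodic
# steady state of  x' = a*x + u(t),  u = cos t, with basis {cos t, sin t}.
a = -1.0
t = np.linspace(0.0, 2*np.pi, 4000, endpoint=False)   # one period, no repeat
dt = t[1] - t[0]
u = np.cos(t)

psi  = np.vstack([np.cos(t),  np.sin(t)])             # basis functions psi_k
dpsi = np.vstack([-np.sin(t), np.cos(t)])             # their time derivatives

# Galerkin conditions: <x_N' - a*x_N - u, psi_j> = 0 for every test function.
# M[j, k] = <dpsi_k - a*psi_k, psi_j>,  b[j] = <u, psi_j>  (rectangle rule,
# which is spectrally accurate on a uniform periodic grid).
M = ((dpsi - a*psi) @ psi.T).T * dt
b = (psi @ u) * dt
X = np.linalg.solve(M, b)
print(X)   # [0.5, 0.5]: the exact steady state is (cos t + sin t)/2
```

The two recovered coefficients reproduce the exact periodic response, since the true solution lies in the span of the chosen basis.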
We now present a connection between WRMs and moment-based theory in terms of the solution of the Sylvester equation (7). To this end, we assume that in (3) fnl(x, u) = 0 and hnl(x) = 0, i.e. the system dynamics are linear.
Remark 2: If fnl(x, u) = 0 and hnl(x) = 0, then the approximated output of system (3) can be computed as yN(t) = Y Ψ(t), where Y = CX, with X the solution of (14).
Proposition 1: Consider system (3) with fnl(x, u) = 0 and hnl(x) = 0, and the signal generator (4). Suppose that Assumption 2 holds and that u admits an expansion as in (10). Let Ψ(t) = ω(t). Then the coefficients of the solution Y computed using the Galerkin method coincide with the elements of CΠ.
Proof: Consider system (3) with fnl(x, u) = 0 and hnl(x) = 0, and the approximating state and input vectors XΨ(t) and UΨ(t), respectively. The residual equation defined in (12) can be written as R(t) = XΨ̇(t) − AXΨ(t) − BUΨ(t), and the approximating trajectory xN can be computed in terms of X by solving the equation ⟨R(t), Ψ(t)⟩ = 0, (16). Assume now that the vector of basis functions Ψ(t) belongs to the class of functions generated by (4), i.e. Ψ(t) = ω(t). Then Ψ̇(t) = ω̇(t) = Sω(t) and, by the linearity of the inner product, equation (16) can be written as (XS − AX − BL)Ω = 0, with Ω = ⟨ω(t), ω(t)⟩. The proof follows once it is noted that 0 ∉ σ(Ω), since Ω = Ω⊤ ≻ 0 under the excitability condition on the pair (S, ω(0)) (see Assumption 2). In particular, Y = CX = CΠ, where Π is the unique solution of (7).
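Proposition 1 can be checked numerically. In the sketch below (our own illustrative matrices A, B, S, L; `solve_sylvester` is SciPy's solver for A X + X B = Q), the Galerkin coefficients obtained with Ψ = ω coincide with the solution Π of AΠ − ΠS = −BL:

```python
import numpy as np
from scipy.linalg import solve_sylvester

A = np.array([[0.0, 1.0], [-2.0, -3.0]])      # Hurwitz (our own choice)
B = np.array([[0.0], [1.0]])
S = np.array([[0.0, 1.0], [-1.0, 0.0]])       # generates cos/sin signals
L = np.array([[1.0, 0.0]])

Pi = solve_sylvester(A, -S, -B @ L)           # solves A Pi - Pi S = -B L

# Galerkin route with Psi(t) = w(t) = [cos t, -sin t] (so that w' = S w)
t = np.linspace(0.0, 2*np.pi, 4000, endpoint=False)
dt = t[1] - t[0]
Psi  = np.vstack([np.cos(t), -np.sin(t)])
dPsi = S @ Psi

Omega = (Psi @ Psi.T) * dt                    # Gram matrix <Psi, Psi>
G     = (dPsi @ Psi.T) * dt                   # <Psi', Psi>

# Projected residual: X G - A X Omega - B L Omega = 0, solved via vec()
n = 2
K = np.kron(G.T, np.eye(n)) - np.kron(Omega.T, A)
X = np.linalg.solve(K, (B @ L @ Omega).flatten(order='F')).reshape(2, 2, order='F')
print(np.allclose(X, Pi))   # True
```

Since G = SΩ here, the projected residual reduces exactly to the Sylvester equation, which is the content of the proposition.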
Inspired by Proposition 1, we present, in the following sections, a general framework to approximate the moment of nonlinear dynamical systems driven by a wide class of input signals. Furthermore, we show that, under additional assumptions, the methods described in this note can be used to approximate the steady-state behaviour of system (3) driven by this general class of inputs.
Remark 3: The class of input signals defined by (4) contains some of the most widely used basis functions, such as polynomials and trigonometric functions. For instance, the first ν polynomial functions can be generated with S in (4) such that S = Nν, where Nν ∈ Rν×ν is a matrix with ones on the upper diagonal and zeros elsewhere. To address more general cases, let λ ∈ σ(S) ⊂ C, and let p be the dimension of the largest Jordan block associated with λ. Then, for each λ, the signal generator (4) can generate linear combinations of the set of functions {t^q e^{λt}}, q ∈ {0, ..., p − 1}.

Remark 4: The methods proposed in Sections III and IV compute the moment for a particular trajectory ω(t). We note that, if required, the methods can be modified to incorporate different ω(t), following a procedure analogous to that used in the so-called "U/N" variation proposed in [11].
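Remark 3 is easy to visualise in code: with the nilpotent matrix Nν (ones on the upper diagonal), the generator ω̇ = Sω produces polynomial signals. A small sketch with our own choices ν = 3 and ω(0) = (0, 0, 1):

```python
import numpy as np
from scipy.linalg import expm

# The nilpotent generator S = N3 produces the polynomials t^2/2, t, 1.
S  = np.diag(np.ones(2), k=1)        # ones on the upper diagonal, 3 x 3
w0 = np.array([0.0, 0.0, 1.0])

for t in (0.5, 2.0):
    w = expm(S * t) @ w0             # w(t) solves w' = S w, w(0) = w0
    print(w)                         # [t**2/2, t, 1]
```

Since S is nilpotent, expm(S t) is the finite sum I + S t + S² t²/2, which is where the polynomial entries come from.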

III. APPROXIMATION OF MOMENTS: THE IMPLICIT SIGNAL GENERATOR CASE
In this section we propose a method to approximate the moment of the nonlinear system (3) driven by the signal generator (4). For this purpose we introduce an additional signal generator. We note that the assumptions considered in this section on the function π (the solution of (6)) resemble those in [11].
To begin with, consider the "extended" signal generator described by the set of first-order differential equations ω̇e(t) = Se ωe(t), (18), with N ≥ ν an integer and with Se ∈ RN×N built by extending S with any matrix S*e such that the pair (Se, ωe(0)) is excitable (in line with Assumption 2).
Assumption 5: The elements of the set Fωe ⊂ H, where H is a complete inner-product space with (closed) domain Ξ ⊂ R, are orthogonal in the interval Ξ. Moreover, each component of the function π, i.e. πk, k ∈ Nn, which solves (6), belongs to H.

Remark 6: Under Assumption 5, one can always extend the set of functions Fωe to form an orthogonal basis of H by considering the orthogonal complement of the subspace spanned by the elements of Fωe (see, for instance, [19] and [20]). Assumption 5, together with Remark 6, directly implies that each πk admits an expansion of the form (20) [19], in terms of the elements of Fωe and a mapping Ek : RN → R; the existence of an Ek such that (20) holds follows directly from Remark 6.
Remark 7: Note that, under Assumption 5, we can always write π as the sum of two contributions, namely π = Π̂ωe + E.

In the following we proceed to formulate a method which allows the computation of an approximation of π in terms of the set Fωe, i.e. in terms of the N-dimensional expansion Π̂ωe. We do this by minimising a residual equation, analogously to what is done for the family of WRMs (see Section II-C).
Then we compute the approximating solution in terms of Π̂ by forcing the residual (23) to be orthogonal to the N-dimensional space spanned by the set Fωe. Equation (22) follows by the linearity of the inner product. Note that Proposition 2 makes explicit use of the extended signal generator (18) to compute an N-dimensional approximation of π in terms of the set Fωe by projecting the residual equation (23) onto the same set of functions, analogously to what is done in the family of WRMs. In other words, the extended signal generator, which defines the set Fωe, generates the function space used to approximate the corresponding moment (see also Remark 9).
as a direct consequence of the fact that the elements of the set Fωe are orthogonal in H [19]. Corollary 1 implies that, if E = 0 in (20), i.e. if π can be exactly written as Π̂ωe, then the approach of Proposition 2 effectively recovers the exact solution Π̂.
Remark 8: If, additionally, Assumption 3 holds and the zero equilibrium of the system ẋ = f(x, 0) is locally exponentially stable, then h(Π̂ωe), with Π̂ computed as in Proposition 2, approximates the steady-state response of the output of the interconnected system (5) (see Theorem 1).
We now rewrite the system of algebraic equations (22) of Proposition 2 in a more convenient form, in which the contributions of the linear and nonlinear parts of system (3) can be easily identified.
and fnl,i is the i-th row of the mapping fnl.
Remark 9: The term Fnl(Π̂, Le)Ω−1 in (26) is the projection of fnl(Π̂ωe, Leωe) onto the function space spanned by the set Fωe.
Corollary 2 provides the approximating solution in a very specific form: the linear Sylvester equation (7) appears explicitly, together with a nonlinear "correction" term.
Remark 10: If fnl(x, u) = 0, then Fnl(Π̂, Le) = 0, and the solution of (26) can be straightforwardly computed as Π̂ = Π̂l = [Π 0], with Π the solution (provided it exists) of the Sylvester equation AΠ − ΠS = −BL. In other words, if system (3) is linear, then the solution of the proposed method coincides with that of the classical approach (discussed in Section II-B).
Remark 11: State-of-the-art numerical routines, such as those described in [21], can be readily used to solve the system of algebraic equations (26) in Π̂.
Remark 12: The choice of an appropriate initial guess Π̂0 is crucial for the fast convergence of any numerical routine when solving (26). In most cases a sensible initial point is the solution of (26) with fnl(x, u) = 0 in (3), i.e. we select Π̂0 = Π̂l, with Π̂l as in Remark 10.
Remark 13: From the previous remark we note that the use of the "phantom" input, which does not appear in the linear part (because of the block of zeros in Le = [L 0]), is instrumental in interpolating the nonlinear term in equation (22).
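The warm-start strategy of Remarks 10-12 can be sketched on a toy scalar system of our own (not the example below): project the residual of x' = -x - x³ + cos t on a truncated Fourier basis, and solve the resulting algebraic equations starting from the linear solution (the analogue of Π̂0 = Π̂l).

```python
import numpy as np
from scipy.optimize import fsolve

# Our own toy example: x' = -x - x^3 + cos t.  The cubic term generates a
# third harmonic, so the basis includes cos(3t) and sin(3t).
t = np.linspace(0.0, 2*np.pi, 4000, endpoint=False)
dt = t[1] - t[0]
u = np.cos(t)
Psi  = np.vstack([np.cos(t), np.sin(t), np.cos(3*t), np.sin(3*t)])
dPsi = np.vstack([-np.sin(t), np.cos(t), -3*np.sin(3*t), 3*np.cos(3*t)])

def residual(P, nonlinear=True):
    xN, dxN = P @ Psi, P @ dPsi
    R = dxN + xN + (xN**3 if nonlinear else 0.0) - u
    return (Psi @ R) * dt                     # projections <R, psi_j>

P_lin = fsolve(lambda P: residual(P, False), np.zeros(4))   # linear solution
P_hat = fsolve(residual, P_lin)                             # warm start
print(np.max(np.abs(residual(P_hat))))        # ~ 0: projections annihilated
```

Starting the nonlinear solve from the linear coefficients (here [0.5, 0.5, 0, 0]) typically converges in a few iterations, which is the point of Remark 12.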
To conclude this section, we illustrate the proposed method by means of the following example.
Example 1: Consider the forced Van der Pol oscillator with a nonlinear output map described by the differential equations (28), where u(t) = Au cos(f0 t), with Au ∈ R and f0 ∈ R+ \ {0}. We can write u using the signal generator (29). Note that, with this selection of matrices, the triple (L, S, ω(0)) is minimal. For this example, the set describing the input signal is Fω = {ω1, ω2} = {cos(f0 t), −sin(f0 t)}. Following (18), we define the extended signal generator (31), where N = 2k, with k ∈ N. Note that the pair (Se, ωe(0)) is excitable and Fω ⊂ Fωe. The parameters of u are selected as Au = 1 and f0 = 1/2. Given that, for this example, the origin of ẋ = f(x, 0) is locally exponentially stable and the matrix S of the signal generator (31) has all simple eigenvalues with zero real part, the assumptions of Theorem 1 are fulfilled; hence the moment of (28) computed along a particular trajectory ω(t) coincides with the steady-state response of the output of the interconnected system. Thus, the moment of the system can be computed along ω(t) using a numerical integration method. This allows for a direct comparison with the approximated moment.
Note that defining Ξ = [t, t + T0], with T0 = 2π/f0, implies that the elements of the set Fωe = {ωe,i}, i ∈ N2k, are orthogonal on L2(Ξ) under the inner-product definition in (8) with weighting function w(t) ≡ 1. Furthermore, note that Assumption 5 holds for this example with H = L2(Ξ), as discussed in the following. Given the nature of the signal generator defined in (29), the input u is always T0-periodic. Moreover, since the zero equilibrium of ẋ = f(x, 0) is locally exponentially stable, the (well-defined) steady-state solution of the interconnection between (28) and (29) is T0-periodic [22, Section VI], i.e. xss(t) = xss(t − T0). Since, as discussed in the previous paragraph, Theorem 1 holds, i.e. xss(t) = π(ω(t)), it is straightforward to conclude that each element of the mapping π belongs to L2(Ξ).
To illustrate the proposed method, we compute Π̂ as in (26) for different numbers of basis functions {ωe,i}, with ωe as in (31). The algorithm used to solve (26) is based on the interior-reflective Newton method described in [21].
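Since the state equations (28) are not reproduced in this extracted text, the reference computation described above can only be sketched for a stand-in system. The following (entirely our own) uses a Van der Pol-type oscillator with a globally damping nonlinearity, so that the origin of the unforced dynamics is exponentially stable as the example requires, and verifies the T0-periodicity of the steady-state response by numerical integration:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Stand-in dynamics (NOT the note's equations (28)):
#   x1' = x2,  x2' = -x1 - (1 + x1**2)*x2 + u,  u = Au*cos(f0*t).
Au, f0 = 1.0, 0.5
T0 = 2*np.pi / f0

def f(t, x):
    u = Au * np.cos(f0 * t)
    return [x[1], -x[0] - (1.0 + x[0]**2) * x[1] + u]

t_end = 40 * T0                # long enough for the transient to decay
sol = solve_ivp(f, (0.0, t_end), [0.0, 0.0], rtol=1e-10, atol=1e-10,
                t_eval=[t_end - T0, t_end])
# After the transient, the response is T0-periodic: x_ss(t) = pi(w(t)).
print(np.max(np.abs(sol.y[:, 1] - sol.y[:, 0])))   # ~ 0
```

The simulated last-period trajectory plays the role of the "exact" moment against which the Galerkin approximation Π̂ωe would be compared.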

IV. APPROXIMATION OF MOMENTS: THE EXPLICIT SIGNAL GENERATOR CASE
The mathematical formalism behind moments has been extended to the case in which system (3) is linear and the input is given in explicit form [12], [23]. This provides an extension of the moment-based framework to a very general class of inputs, including discontinuous periodic signals. Motivated by this, in this section we first provide a further extension of the notion of moment to nonlinear systems driven by explicit signal generators. Then, inspired by the methods developed in Section III, we propose a method to compute an approximation of such a moment.

A. Moment for nonlinear systems at explicit signal generators
From now on we focus our interest on signals described by a T-periodic explicit-form signal generator (33), defined for t ≥ T, where the matrix Λ(t) ∈ Rν×ν is non-singular for all t ∈ R+.
Analogously to the implicit signal generator case of Section II-A, we now introduce a set of assumptions required to formalise the definition of moment.
Definition 4: Consider system (3) and the signal generator (33). Suppose Assumptions 6 and 7 hold. Then we call the function h ∘ π the moment of system (3) at (Λ, L). Analogously to the linear case discussed in [12], [23], defining the moment of system (3) according to Definition 4 is justified by the equivalence, when an implicit model of (33) is available, between the new and the classical definition (see Definition 2).
Assumption 8: ωΛ(t) in (33) is Laplace transformable and such that σ(L{ωΛ(t)}) ⊂ C0. Assumption 7 plays the role of the minimality condition of Assumption 2 for the implicit signal generator case, while Assumption 8 corresponds to the persistence of excitation condition of Assumption 3. We now formulate a proposition which relates the moment as in Definition 4 to the steady-state output response of system (3) driven by (33).
Proof: We begin by noting that the input u is bounded and periodic by Assumption 8. Then, since the zero equilibrium of ẋ = f(x, 0) is locally exponentially stable, the steady-state response xss is locally well-defined [22, Section VI] and, using the well-known variation-of-parameters formula, can be written as in (34). The proof follows by noting that xss(t) = π(t, ωΛ) in (34), and hence yss(t) = h(π(t, ωΛ)).
In the following, we explicitly exploit the T-periodicity of ωΛ to simplify the computation of the mapping π for the case in which the zero equilibrium of ẋ = f(x, 0) is locally exponentially stable.
Corollary 3: Suppose Assumptions 7 and 8 hold and the zero equilibrium of system ẋ = f(x, 0) is locally exponentially stable. Then, given that (33) is T-periodic, equation (34) can be rewritten over a single period in terms of the constant matrix P ∈ Rn×n defined as P = (In − e^{AT})^{-1}.
In other words, the result of Corollary 3 indicates that (under the above assumptions) the moment of (3) at (Λ, L) can be fully described by computing (34) over only one time period T .
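In the linear special case the mechanism behind Corollary 3 can be verified directly: the steady state at t = 0 equals P applied to a one-period convolution, and it must agree with the Sylvester-equation route of Section II. A sketch with our own A, B and the smooth input u = cos t:

```python
import numpy as np
from scipy.linalg import expm, solve_sylvester
from scipy.integrate import trapezoid

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # Hurwitz (our own choice)
B = np.array([[0.0], [1.0]])
T = 2*np.pi

# one-period convolution  v = int_0^T e^{A(T-s)} B u(s) ds  (quadrature)
s = np.linspace(0.0, T, 2001)
integ = np.array([expm(A*(T - si)) @ (B[:, 0]*np.cos(si)) for si in s])
v = trapezoid(integ, s, axis=0)

# steady state at t = 0 via P = (I - e^{AT})^{-1}, as in Corollary 3
x0 = np.linalg.inv(np.eye(2) - expm(A*T)) @ v

# cross-check: u = L w, w' = S w, w(t) = [cos t, -sin t], x_ss(t) = Pi w(t)
S = np.array([[0.0, 1.0], [-1.0, 0.0]])
L = np.array([[1.0, 0.0]])
Pi = solve_sylvester(A, -S, -B @ L)        # A Pi - Pi S = -B L
print(np.abs(x0 - Pi[:, 0]).max())          # small quadrature error only
```

The same one-period formula applies unchanged to discontinuous T-periodic inputs, which is what makes the explicit framework attractive for switching signals.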

B. Approximation of Π(t) driven by (Λ, L)
In this section we present a method to approximate the moment of a nonlinear system (3) driven by an explicit signal generator (33) (as formalised in Definition 4). To achieve this, we introduce the following assumption.
Assumption 9: The set FΨ = {ψi}, where each ψi belongs to a complete inner-product space H* with (closed) domain Ξ ⊂ R, is a complete orthogonal set in Ξ, and each component of the mapping Π(t) = [Π(t)ij]n,ν belongs to H*, i.e. it can be expressed as a unique linear combination of the elements of FΨ.

Note that for this explicit signal generator case, we propose to approximate the matrix-valued function Π directly, and then reconstruct the moment as π(t, ωΛ(t)) = Π(t)ωΛ(t).
Remark 16: Under Assumption 9, Π(t) can be written as Π(t) = ΓΠ(Iν ⊗ Ψ(t)), where ΓΠ ∈ Rn×Nν collects the expansion coefficients of the entries of Π.

As in the implicit signal generator case, we now propose a method to compute an N-dimensional approximation Π̂(t) of Π(t) based on a residual equation. This is considered in the following proposition.
Proposition 4: Consider the nonlinear system (3) driven by the explicit signal generator (33). Suppose that Assumptions 6, 7 and 9 hold. Then the moment of system (3) at (Λ, L) can be approximated as h(Π̂(t)ωΛ(t)), where Π̂(t) = Γ̂Π(Iν ⊗ Ψ(t)) and Γ̂Π is the solution of the system of algebraic equations (41). Proof: We omit the proof, since it is analogous to that of Proposition 2, using the integral equation (34) and Remark 14.
Remark 17: If, additionally, Assumption 8 holds and the zero equilibrium of ẋ = f(x, 0) is locally exponentially stable, then h(Π̂(t)ωΛ(t)), with Π̂(t) computed as in Proposition 4, approximates the steady-state output response of system (3) driven by (33). Additionally, exploiting the periodicity of the steady state and following the result of Corollary 3, it is possible to compute Π̂l and Π̂nl in Proposition 4 from integrals over a single period, and hence Π̂ can be fully described using only information over one period.
Corollary 4: Let Assumptions 6, 7 and 9 be satisfied. Then the system of algebraic equations (41) can be equivalently written in matrix form as in (44), with matrices ∆ ∈ RνN×νN and ΠΨl, ΠΨnl ∈ Rn×νN. Proof: Note that, by the orthogonality of the set FΨ = {ψi}, i ∈ NN, under Assumption 9, the matrix ∆ can be written explicitly in terms of the norms ‖ψj ωΛi‖, where ‖·‖ denotes the norm induced by the inner product defined in (8). It then follows from the minimality condition of Assumption 7 and the invertibility of Λ(t) for t ≥ 0 that ωΛi ≠ 0 for all i ∈ Nν, and hence ‖ψj ωΛi‖² > 0 for all j ∈ NN and i ∈ Nν. Therefore 0 ∉ σ(∆), since ∆ = ∆⊤ ≻ 0, and the proof follows. Analogously to the implicit case presented in Corollary 2, equation (44) is decomposed in a specific form, in which the contribution of the linear solution ΠΨl appears explicitly, together with a nonlinear "correction" term.
Remark 18: As in the case of equation (26), a sensible choice of the initial guess ΓΠ0 secures a fast convergence rate when solving (44). This guess can be chosen in terms of the linear solution of (44), i.e. ΓΠ0 = ΠΨl ∆−1.

To conclude this section, we provide an example that illustrates the applicability of the proposed method.
Example 2: Consider the nonlinear resonant inverter circuit depicted in Figure 2, with dynamics described by the differential equations (46), where the voltage at the nonlinear resistor is given by N(il) = R(il + α il²), with α > 0. The input u is a switching function, given by a square wave with angular frequency ω and described in explicit form as in (47) (see [23]), in terms of the switching signal sign(sin(t)), with sign(0) = 0. We select the parameters L and C of the system (46) as in [23], and the nonlinear resistor coefficient is set to α = 1.5. Note that the signal generator (47) satisfies all the assumptions of Definition 4. In addition, (47) satisfies Assumption 8 and the zero equilibrium of ẋ = f(x, 0) is locally exponentially stable; hence the result of Proposition 3 holds, and the moment of system (46) driven by (47) coincides with the steady-state output response of the interconnected system.
Given the periodic nature of the input, we select trigonometric polynomials as the orthogonal set of basis functions on Ξ = [t, t + T], with T = 2π/ω, i.e. the set {ψi} is chosen as FΨ = {1} ∪ {cos(pωt), sin(pωt)}, p ∈ Nk. Figure 3 shows the time history of h(π(t, ωΛ(t))) computed using a Runge-Kutta method, and the time history of h(Π̂(t)ωΛ(t)) for different numbers of basis functions in the set FΨ, i.e. k ∈ {10, 20, 100}. The large number of components required to successfully approximate the moment relates to the discontinuous nature of the problem. In fact, it can be seen in Figure 3 (bottom) that the absolute value of the approximation error is largest at the points where ωΛ(t) is discontinuous, though it improves with increasing k.
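The slow convergence reported in Example 2 is the classical behaviour of truncated trigonometric expansions of discontinuous signals (the Gibbs phenomenon): away from the jumps the error shrinks with k, while near the jumps it remains O(1). A quick sketch of our own with the square wave itself:

```python
import numpy as np

def square_partial(t, k):
    # first k odd harmonics of the Fourier series of sign(sin t)
    return sum(4.0/(np.pi*p) * np.sin(p*t) for p in range(1, 2*k, 2))

t_mid  = np.pi/2                         # far from the jumps at 0 and pi
t_near = np.linspace(0.0, 0.2, 500)      # neighbourhood of the jump at 0

for k in (10, 100):
    err_mid  = abs(square_partial(t_mid, k) - 1.0)
    err_near = np.max(np.abs(square_partial(t_near, k) - np.sign(np.sin(t_near))))
    print(k, err_mid, err_near)   # err_mid shrinks with k; err_near stays O(1)
```

This mirrors the bottom panel of Figure 3: increasing k sharpens the approximation everywhere except in ever-narrower layers around the discontinuities of ωΛ(t).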

V. CONCLUSIONS
This technical note presents a framework to approximate the moment of nonlinear systems driven by signal generators. The methods proposed are inspired by a connection between moment-based theory and the family of WRMs. This note formalises and exploits this connection to propose a set of methods to approximate the moment of a nonlinear system driven by an implicit signal generator. Furthermore, we present the formal definition of moments for nonlinear systems driven by explicit signal generators and we propose a method to compute such moments, extending the applicability of the framework to a very general class of inputs, including periodic discontinuous sources. While our contributions are technical and strictly related to the definition and computation of moments, these contributions allow the computation and definition of new classes of reduced-order models, following the system-theoretic approach to nonlinear model reduction by moment-matching presented in e.g. [2]. For instance, using the contributions of this note, one can define and compute a family of reduced models achieving moment-matching at an explicit signal generator for a general class of nonlinear systems, extending the framework presented in [2] for linear dynamical systems driven by this class of input signals. Finally, we note that the methods proposed in this note are illustrated by means of simple examples.
This work was supported by the Science Foundation Ireland under Grant No. SFI/13/IA/1886 and the Royal Society International Exchange Cost Share programme (IEC\R1\180018). This work has been partially supported by the European Union's Horizon 2020 Research and Innovation Programme under grant agreement No. 739551 (KIOS CoE).

Fig. 1: Top: time histories of the approximated moment for the forced Van der Pol oscillator driven by the signal generator (31), for different values of k. Bottom: time histories of the corresponding absolute errors.

Fig. 3: Top: time histories of the approximated moment for the nonlinear resonant converter driven by the signal generator (47), for different values of k. Bottom: time histories of the corresponding absolute errors (logarithmic scale).