Adaptive Control for Systems With Time-Varying Parameters

This article investigates the adaptive control problem for systems with time-varying parameters using the so-called congelation of variables method. First, two scalar examples are discussed to illustrate how to deal with time-varying parameters in the feedback path and in the input path, respectively. The control problem for an $n$-dimensional lower triangular system via state feedback is then discussed to show how to combine the congelation of variables method with adaptive backstepping techniques. To solve the output regulation problem via output feedback, a problem which cannot be solved directly due to the coupling between the input and the time-varying perturbation, the input-to-state stability (ISS) of the inverse dynamics, referred to as strong minimum-phaseness, is exploited. This allows converting such a coupling into a coupling between the output and the time-varying perturbation. A set of filters, resulting in ISS state estimation error dynamics, is designed to cope with the unmeasured state variables. Finally, a controller is designed based on a small-gain-like analysis that takes all subsystems into account. Simulation results show that the proposed controller achieves asymptotic output regulation and outperforms the classical adaptive controller in the presence of time-varying parameters that are neither known nor asymptotically constant.

Early works (see, e.g., [6]) exploit persistence of excitation to guarantee stability by ensuring that the parameter estimates converge to the true parameters. Subsequent works (see, e.g., [9], [10]) have removed the restriction of persistence of excitation by requiring bounded and slow (in an average sense) parameter variations.
More recent works can be mainly categorized into two trends. One of them is based on the so-called robust adaptive law or switching $\sigma$-modification (see [3]), a mechanism which adds leakage to the parameter update law if the parameter estimates drift out of a prespecified reasonable region, to guarantee boundedness of the parameter estimates. This approach achieves asymptotic tracking when the parameters are constant; otherwise the tracking error is nonzero and related to the rates of the parameter variations (see [11]). In [12] and [13], the parameter variations are modeled in two parts, known parameter variations and unknown variations, and the residual tracking error depends only on the rates of the unknown parameter variations.
The other trend exploits the so-called filtered transformation, which is essentially an adaptive observer described via a change of coordinates, and the projection operation, which confines the parameter estimates within a prespecified compact set to guarantee the boundedness of the parameter estimates (see [14], [15], and [16]). These methods can guarantee asymptotic tracking provided that the parameters are bounded in a compact set, their derivatives are $L_1$, and the disturbance on the state evolution is additive and $L_2$. Moreover, a priori knowledge on the parameter variations is not needed and the residual tracking error is independent of the rates of the parameter variations.
The methods mentioned above cannot guarantee zero-error regulation when the unknown parameters are persistently varying. To achieve asymptotic state/output regulation when the time-varying parameters are neither known nor asymptotically constant, in [17] and [18] a method called the congelation of variables has been proposed and developed on the basis of the adaptive backstepping approach and the adaptive immersion and invariance (I&I) approach, respectively. In the spirit of the congelation of variables method, each unknown time-varying parameter is treated as a nominal unknown constant parameter perturbed by the difference between the true parameter and the nominal parameter, which causes a time-varying perturbation term. The controller design is then divided into a classical adaptive control design, with constant unknown parameters, and a damping design via dominance to counteract the time-varying perturbation terms. This method is compatible with most adaptive control schemes using parameter estimates, as it does not change the original parameter update law designed for time-invariant systems.
Since full-state feedback is not always implementable, most practical scenarios require an output-feedback adaptive control scheme. In the output-feedback design with the congelation of variables method, the major difficulty is caused by the coupling between the input and the time-varying perturbation. In this case simply strengthening damping terms in the controller alters the input (as well as the perturbation itself) and therefore causes a chicken-and-egg dilemma, which prevents stabilization via dominance. In [19] and [20], a special output-feedback case is solved on the basis of adaptive backstepping and adaptive I&I, respectively, by exploiting a modified minimum-phase property for time-varying systems and decomposing the coupling between the input and the time-varying perturbation into couplings between some output-related nonlinearities and some other time-varying perturbations, which enables the use of the dominance design again, though it is still restricted by a relative degree condition. This restriction is relaxed in this article.
The article is organized as follows. In Section II, two motivating examples of scalar systems to illustrate the use of the congelation of variables method are presented, and an n-dimensional lower-triangular system with unmatched uncertainties controlled by an adaptive state-feedback controller is discussed to elaborate on the combination of the congelation of variables method with adaptive backstepping. With these design tools, in Section III the article revisits, integrates, and further develops the results in [19] and [20] on the decomposition of the perturbation coupled with the input, and proposes a controller design based on the scheme in [21] together with a more comprehensive small-gain-like analysis, when compared with the one in [19] and [20], that incorporates the filter subsystems into the analysis. These allow the output-feedback scheme proposed in this article to achieve asymptotic output regulation and to guarantee boundedness of all closed-loop signals and, at the same time, remove the restriction of having relative degree 1 or constant high-frequency gain, as assumed in [19] and [20]. In Section IV a numerical example to highlight the performance improvement achievable with the proposed scheme is presented.
Notation: This article uses standard notation unless stated otherwise. For an $n$-dimensional vector $v \in \mathbb{R}^n$, $|v|$ denotes the Euclidean 2-norm, while $\|\cdot\|_F$ denotes the Frobenius norm of a matrix. $I$ and $S$ denote the identity matrix and the upper-shift matrix of proper dimension, respectively. For an $n$-dimensional time-varying signal $s : \mathbb{R} \to \mathbb{R}^n$, the image of which is contained in a compact set $\mathcal{S}$, $\Delta_s : \mathbb{R} \to \mathbb{R}^n$ denotes the deviation of $s$ from a constant value $s_\ell$, i.e., $\Delta_s(t) = s(t) - s_\ell$, and $\delta_s \in \mathbb{R}$ denotes the supremum of the 2-norm of $s$, i.e., $\delta_s = \sup_{t \ge 0}|s(t)| \ge 0$. $(\cdot)^{(n)} = \frac{d^n}{dt^n}$ denotes the $n$th time derivative operator.
In this article the unknown time-varying system parameters $\theta : \mathbb{R} \to \mathbb{R}^q$ and $b_m : \mathbb{R} \to \mathbb{R}$ may verify one of the assumptions below.

Assumption 2. (Smooth bounded parameters):
The parameter $\theta$ is smooth, that is, $\theta^{(i)}(t) \in \Theta_i$, for all $i \ge 0$ and all $t \ge 0$, where the $\Theta_i$ are compact sets, possibly unknown. $\delta_{\Delta_\theta}$ is assumed to be known.

II. MOTIVATING EXAMPLES AND PRELIMINARY RESULT
In this section two motivating examples are provided to briefly introduce the so-called congelation of variables method, which is the core idea of this article for coping with time-varying parameters.

A. Parameter in the Feedback Path
To begin with, consider a scalar nonlinear system described as follows:
$$\dot{x} = u + \theta(t)x^3 \qquad (1)$$
where $x(t) \in \mathbb{R}$ is the state, $u(t) \in \mathbb{R}$ is the input, and $\theta(t) \in \mathbb{R}$ is an unknown time-varying parameter satisfying Assumption 1. Assuming that we have an "estimate" $\hat\theta$ of the parameter $\theta(t)$, we can rewrite (1) as
$$\dot{x} = u + \hat\theta x^3 + (\theta - \hat\theta)x^3. \qquad (2)$$
One way to design an update law for $\hat\theta$ is to consider a Lyapunov function candidate of the form
$$V(x,\hat\theta) = \frac{1}{2}x^2 + \frac{1}{2\gamma_\theta}(\theta - \hat\theta)^2. \qquad (3)$$
Assuming $\theta$ is differentiable with respect to time for the time being and taking the time derivative of $V$ along the solutions of (2) yields
$$\dot{V} = x(u + \hat\theta x^3) + (\theta - \hat\theta)\Big(x^4 - \frac{\dot{\hat\theta}}{\gamma_\theta}\Big) + (\theta - \hat\theta)\frac{\dot\theta}{\gamma_\theta} \qquad (4)$$
which means that the selection of the parameter update law
$$\dot{\hat\theta} = \gamma_\theta x^4 \qquad (5)$$
cancels the effect of the unknown $(\theta - \hat\theta)x^3$ term. The constant $\gamma_\theta > 0$ is known as the adaptation gain. In classical adaptive control problems one assumes that $\theta$ is constant, that is, $\dot\theta = 0$ for all $t \ge 0$, and selects the control law
$$u = -kx - \hat\theta x^3 \qquad (6)$$
with $k > 0$, which yields $\dot{V} = -kx^2 \le 0$. We can conclude from this that $x$ and $\hat\theta$ are bounded, and $x$ converges to 0 by invoking Barbalat's lemma. When $\dot\theta \neq 0$, one has to deal with the indefinite term $(\theta - \hat\theta)\dot\theta/\gamma_\theta$. One way to do this is to modify (5) with the so-called projection operation (see, e.g., [22], [23]), which confines the parameter estimate $\hat\theta$ inside a convex compact set and therefore guarantees the boundedness of $\theta - \hat\theta$. It follows that the boundedness of $\dot\theta$ guarantees the boundedness of $x$ (either exact boundedness, e.g., in [24], or boundedness in an average sense, e.g., in [10]), and $\dot\theta \in L_1$ guarantees the convergence of $x$ to 0 (e.g., in [15], [25], and [16]). In some other works (e.g., in [11], [12], and [13]), the boundedness of $\hat\theta$ is guaranteed by the so-called switching $\sigma$-modification, which adds some leakage to the integrator (5) if the parameter estimate drifts outside a reasonable region, and which is often referred to as soft projection. All these schemes share the similarity that they treat $\dot\theta$ as a disturbance.
As a result, some disturbance attenuation effort is made to guarantee that a bounded $\dot\theta$ causes a bounded state/output regulation/tracking error, and that a sufficiently fast converging $\dot\theta$, which means that $\theta$ becomes constant sufficiently fast, guarantees the convergence of the error to 0. Consequently, none of these methods can guarantee zero-error regulation/tracking when the unknown parameter is persistently time varying, in which case $\dot\theta$ is nonvanishing. However, note that the reason why we cannot avoid $\dot\theta$ in the analysis is the $\theta - \hat\theta$ term in (3). This term is included only to guarantee the boundedness of $\hat\theta$, yet it by no means guarantees the convergence of $\hat\theta - \theta$, no matter whether $\theta$ is time-varying or constant; thus replacing $\theta$ with a constant $\theta_\ell$, to be determined, can guarantee the same properties. $\theta_\ell$ can be regarded as the average of $\theta(t)$, which is not necessarily known. In the light of this, consider the modified Lyapunov function candidate
$$V(x,\hat\theta) = \frac{1}{2}x^2 + \frac{1}{2\gamma_\theta}(\theta_\ell - \hat\theta)^2. \qquad (7)$$
Taking the time derivative of $V$ along the trajectories of (2) yields
$$\dot{V} = x(u + \hat\theta x^3) + (\theta_\ell - \hat\theta)\Big(x^4 - \frac{\dot{\hat\theta}}{\gamma_\theta}\Big) + \Delta_\theta x^4 \qquad (8)$$
where $\Delta_\theta = \theta - \theta_\ell$. Comparing (8) with (4) we see that the substitution of $\theta_\ell$ for $\theta$ eliminates the $\dot\theta$ term, at the cost of adding a perturbation term $\Delta_\theta x^4$ due to the inconsistency between $\theta$ and $\theta_\ell$. Considering the same parameter update law as in (5) and a new control law
$$u = -kx - \hat\theta x^3 - \frac{\delta_{\Delta_\theta}^2}{4\epsilon_{\Delta_\theta}}x^5 \qquad (9)$$
where $\epsilon_{\Delta_\theta} > 0$ is a constant, chosen such that $\epsilon_{\Delta_\theta} < k$, to balance the linear and the nonlinear terms, yields, by Young's inequality,
$$\dot{V} \le -(k - \epsilon_{\Delta_\theta})x^2. \qquad (10)$$
Therefore we can conclude boundedness of all trajectories of the closed-loop system as well as convergence of $x$ to 0 using the same argument as the one used in the classical constant-parameter problem, without requiring a vanishing $\dot\theta$. The method of substituting the constant $\theta_\ell$ for the time-varying $\theta$ to avoid unnecessary time derivatives is called congelation of variables [17]. Note that controllers designed via the congelation of variables method can be used for systems with fast-varying parameters, as the design does not rely on properties of $\dot\theta$.
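The scalar design above can be checked numerically. The following sketch simulates (1) under the control law (9) and the update law (5), using a simple Euler discretization; the parameter trajectory $\theta(t) = 1 + 0.5\sin(3t)$ (so $\theta_\ell = 1$ and $\delta_{\Delta_\theta} = 0.5$) and all gain values are assumptions made for this illustration only.

```python
import math

# Simulation of the scalar example: xdot = u + theta(t)*x^3, where
# theta(t) = theta_l + Delta_theta(t) is unknown to the controller.
# Illustrative values: theta_l = 1, Delta_theta(t) = 0.5*sin(3t), so
# delta_Dtheta = 0.5 is the (assumed known) bound used in the damping term.
theta_l, delta_Dtheta = 1.0, 0.5
k, gamma_theta, eps = 2.0, 1.0, 0.5   # feedback, adaptation, damping constants

dt, T = 1e-3, 20.0
x, theta_hat = 1.0, 0.0
x_max = 0.0
for i in range(int(T / dt)):
    t = i * dt
    theta = theta_l + delta_Dtheta * math.sin(3.0 * t)  # persistently varying
    # control law (9): classical terms plus the Delta_theta damping term
    u = -k * x - theta_hat * x**3 - (delta_Dtheta**2 / (4.0 * eps)) * x**5
    # parameter update law (5): unchanged from the constant-parameter design
    theta_hat += dt * gamma_theta * x**4
    x += dt * (u + theta * x**3)
    x_max = max(x_max, abs(x))

print(abs(x), theta_hat, x_max)
```

The state is driven to (numerically) zero although $\theta(t)$ never stops varying, while $\hat\theta$ remains bounded without converging to $\theta(t)$, as the discussion above predicts.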
Remark 1: The control law (9) and the parameter update law (5) do not depend on $\theta_\ell$, in the same way as classical adaptive controllers do not depend on $\theta$, thus showing the "adaptive" property of the proposed mechanism. One can interpret the proposed controller as a combination of an adaptive controller, to cope with the unknown parameter $\theta_\ell$, and a robust controller, to cope with the time-varying perturbation $\Delta_\theta(t)$. This fact can also be revealed by noting that, when $\theta$ is constant, one could select $\theta_\ell = \theta$, hence $\delta_{\Delta_\theta} = 0$, and the control law (9) reduces to the classical control law (6).
It is also worth discussing the difference between the proposed adaptive control scheme and a pure robust control scheme in which $\theta_\ell$ is treated as a nominal parameter. To illustrate this, consider a practical scenario in which we have a circuit that has to work with one of three resistors with values 50, 100, and 150 Ω, yet which one is used is unknown. In addition, due to temperature variations, the resistances have a fluctuation of ±10 Ω. In the spirit of the proposed method, $\theta_\ell$ equals either 50, 100, or 150 Ω, which is unknown and not used in the controller design, as it is replaced by the dynamically updated $\hat\theta$, and $\delta_{\Delta_\theta} = 10$ Ω, which is known and used in the controller design. In the spirit of robust control, one has to determine the nominal resistance of the resistor before designing the controller, and, according to the known information, the best guess is $\theta_\ell = 100$ Ω. In this case the maximum deviation from this nominal value is $\delta_{\Delta_\theta} = 60$ Ω, which is caused not only by the parameter variation but also by the imperfect knowledge of the true resistance of the resistor used. This leads to a more conservative design that uses an unnecessarily high gain and may cause severe noise amplification issues. In an extreme case in which the nominal resistance of the used resistor is completely unknown, one cannot design a robust controller, while one can still design an adaptive controller using the proposed method.
Remark 2: The control law (9) depends on $\delta_{\Delta_\theta}$, which is assumed to be known by Assumption 1. Even if $\delta_{\Delta_\theta}$ is unknown, one can easily overcome this by building an "estimate" of $\delta_{\Delta_\theta}$ via classical adaptive control techniques, since $\delta_{\Delta_\theta}$ is a constant and the control law is linearly parameterized; see Remark 2 of [21] for a brief example.
Remark 3: It is worth introducing a convention to clarify the spirit in which we treat unknown quantities. If an unknown indefinite term in the time derivative of the Lyapunov function vanishes as the system parameters become constant, then this term is to be dominated by a static damping design, like the $\Delta_\theta$-term in this case, and we do not aim at estimating $\delta_{\Delta_\theta}$, the bound of $\Delta_\theta(t)$; if an unknown indefinite term is not vanishing even when all system parameters are constant, like the $\theta_\ell$-term in this case, then this term is to be compensated by a dynamically updated "estimate," which is $\hat\theta$ in this case. The reasons for this convention of design are, first, that we do not want to overextend the dimension of the closed-loop system by adding too many dynamic estimates, and, second, that we need the static damping terms to counteract fast parameter variations for better transient performance (for the same reason one can use nonlinear damping techniques even for systems with constant parameters).
Remark 4: Consider the classical adaptive control problem in which $\theta$ is constant. The closed-loop dynamics can be described via a negative feedback loop consisting of two passive systems, namely
$$\Sigma_1:\ \dot{x}_1 = -kx_1 + u_1x_1^3,\quad y_1 = x_1^4,\qquad \Sigma_2:\ \dot{x}_2 = \gamma_\theta u_2,\quad y_2 = x_2$$
where $x_1 = x$, $x_2 = \hat\theta - \theta$, and the interconnection is $u_1 = -y_2$, $u_2 = y_1$. The storage functions are $S_1 = \frac{1}{2}x_1^2$ and $S_2 = \frac{1}{2\gamma_\theta}x_2^2$, respectively. It is well known that the parameter update law (5) is neither designed to guarantee the convergence of $\hat\theta - \theta$ to 0 nor to make $\hat\theta$ estimate $\theta$, though $\hat\theta$ is called the parameter estimate by convention, but to make $\hat\theta - \theta$ an input/output signal that forms a passive interconnection. When $\theta$ is time varying, the dynamics of $\Sigma_2$ are described by
$$\dot{x}_2 = \gamma_\theta u_2 - \dot\theta$$
which causes the loss of passivity from $u_2$ to $y_2$. The congelation of variables method can therefore be interpreted as selecting a new signal $\hat\theta - \theta_\ell$ that can yield a passive interconnection, while maintaining the passivity of $\Sigma_1$ by strengthened damping. Within this framework, the two passive systems are described by
$$\Sigma_1:\ \dot{x}_1 = -kx_1 - \frac{\delta_{\Delta_\theta}^2}{4\epsilon_{\Delta_\theta}}x_1^5 + \Delta_\theta x_1^3 + u_1x_1^3,\quad y_1 = x_1^4,\qquad \Sigma_2:\ \dot{x}_2 = \gamma_\theta u_2,\quad y_2 = x_2$$
with $x_2 = \hat\theta - \theta_\ell$.

B. Parameter in the Input Path
In what follows we show how to extend the idea of congelation of variables to systems in which a time-varying parameter is coupled with the input, by considering the nonlinear system
$$\dot{x} = \theta(t)x^3 + b(t)u \qquad (16)$$
where $\theta(t)$ satisfies Assumption 1 and $b(t) \in \mathbb{R}$ satisfies Assumption 3. Equation (16) can be rewritten as
$$\dot{x} = \bar{u} + \hat\theta x^3 + (\theta_\ell - \hat\theta)x^3 + \Delta_\theta x^3 - b_\ell(\varrho_\ell - \hat\varrho)\bar{u} + \Delta_b\hat\varrho\,\bar{u} \qquad (17)$$
where $\varrho_\ell = 1/b_\ell$ and $u = \hat\varrho\,\bar{u}$. From classical adaptive control theory (see, e.g., [2]) we know that the effect of the $(\theta_\ell - \hat\theta)$- and $(\varrho_\ell - \hat\varrho)$-terms of (17) can be cancelled by selecting the parameter update law (5) together with
$$\dot{\hat\varrho} = -\gamma_\varrho\,\mathrm{sgn}(b)\,\bar{u}x \qquad (18)$$
and considering the Lyapunov function candidate $V(x,\hat\theta,\hat\varrho) = \frac{1}{2}x^2 + \frac{1}{2\gamma_\theta}(\theta_\ell - \hat\theta)^2 + \frac{|b_\ell|}{2\gamma_\varrho}(\varrho_\ell - \hat\varrho)^2$, the time derivative of which along the trajectories of (17) satisfies
$$\dot{V} = x(\bar{u} + \hat\theta x^3) + \Delta_\theta x^4 + \Delta_b\hat\varrho\,\bar{u}x. \qquad (19)$$
Note that the perturbation term $\Delta_b\hat\varrho\,\bar{u}x$ depends on $\bar{u}$ explicitly, which means that we cannot dominate this term by simply adding damping terms to $\bar{u}$, as doing this also alters the perturbation term itself. Instead, we need to make $\Delta_b\hat\varrho\,\bar{u}x$ nonpositive by designing $\bar{u}$ and selecting $b_\ell$. Consider $\bar{u}$ as a feedback control law with a positive nonlinear gain, that is,
$$\bar{u} = -\kappa(x,\hat\theta)x,\qquad \kappa(x,\hat\theta) = k + \frac{\hat\theta^2}{4\epsilon_\theta}x^4 + \frac{\delta_{\Delta_\theta}^2}{4\epsilon_{\Delta_\theta}}x^4 \qquad (20)$$
where $k > \epsilon_\theta + \epsilon_{\Delta_\theta} > 0$. Selecting $b_\ell$ (with $\mathrm{sgn}(b_\ell) = \mathrm{sgn}(b)$) as an "extreme" value such that $\Delta_b(t)$ has the same sign as $b$ for all $t \ge 0$, noting that $\hat\varrho$ retains the sign of $\hat\varrho(0)$, with $\mathrm{sgn}(\hat\varrho(0)) = \mathrm{sgn}(b)$, under (18) and (20), and noting that $\Delta_b\hat\varrho\,\bar{u}x \le 0$, yields
$$\dot{V} \le -(k - \epsilon_\theta - \epsilon_{\Delta_\theta})x^2. \qquad (21)$$
Exploiting the same stability argument as before, boundedness of the system trajectories and convergence of $x$ to zero follow.

Remark 5: This example highlights the flexibility of the congelation of variables method: the congealed parameter $(\cdot)_\ell$ can be selected according to the specific usage. It can be a nominal value for robust design, or an "extreme" value to create sign definiteness, as long as the resulting perturbation $\Delta_{(\cdot)}$ is treated consistently. One can even make $(\cdot)_\ell$ a time-varying signal subject to some of the assumptions used in the literature (e.g., $\dot{(\cdot)}_\ell \in L_\infty$, $\dot{(\cdot)}_\ell \in L_1$; see, e.g., [10], [15]), and use the congelation of variables method to relax these assumptions. This is the reason why the proposed method is named "congelation" rather than "freeze."

Remark 6: Similarly to the effect of the selection of $\theta_\ell$ in Remark 4, the selection of $b_\ell$ makes $\hat\varrho - \frac{1}{b_\ell}$ a passivating input/output signal.
In addition, note that the overall system is passive from $-\Delta_b\hat\varrho\kappa x$ to $x$ (see Fig. 2). Our selection of $b_\ell$ always guarantees that $-\Delta_b\hat\varrho\kappa$ is negative and therefore yields a negative feedback "control" (if regarding $-\Delta_b\hat\varrho\kappa x$ as the control law), which is well known to possess an arbitrarily large gain margin in a passive system and to be robust against the variation of $\Delta_b\hat\varrho\kappa$.
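As in Section II-A, the input-path design can be illustrated numerically. The sketch below is an assumption-laden example (Euler discretization; $\theta(t) = 1 + 0.5\sin 3t$ and $b(t) = 2 + 0.5\sin t$, so $\mathrm{sgn}(b) = +1$ is known, consistently with Assumption 3); the gain $\kappa$ uses Young's-inequality damping terms, and neither $\theta_\ell$ nor $b_\ell$ appears in the controller.

```python
import math

# Simulation of the input-path example: xdot = theta(t)*x^3 + b(t)*u.
# Illustrative parameter trajectories: theta(t) = 1 + 0.5*sin(3t),
# b(t) = 2 + 0.5*sin(t). Conceptually b_l = inf_t b(t) = 1.5, so that
# Delta_b(t) >= 0, but neither b_l nor theta_l is used by the controller.
delta_Dtheta = 0.5                      # assumed known bound on |Delta_theta|
k, eps1, eps2 = 1.0, 0.25, 0.25         # k > eps1 + eps2
gamma_theta, gamma_rho = 1.0, 1.0

dt, T = 1e-3, 30.0
x, theta_hat, rho_hat = 1.0, 0.0, 0.5   # sgn(rho_hat(0)) = sgn(b) = +1
for i in range(int(T / dt)):
    t = i * dt
    theta = 1.0 + 0.5 * math.sin(3.0 * t)
    b = 2.0 + 0.5 * math.sin(t)
    # positive nonlinear gain kappa: dominates theta_hat*x^3 and Delta_theta*x^3
    kappa = k + (theta_hat**2 / (4 * eps1)) * x**4 \
              + (delta_Dtheta**2 / (4 * eps2)) * x**4
    u_bar = -kappa * x                  # feedback with positive nonlinear gain
    u = rho_hat * u_bar
    theta_hat += dt * gamma_theta * x**4            # update law (5)
    rho_hat += dt * (-gamma_rho * 1.0 * u_bar * x)  # update law (18), sgn(b)=+1
    x += dt * (theta * x**3 + b * u)

print(abs(x), rho_hat, theta_hat)
```

Note how $\hat\varrho$ is nondecreasing here ($\dot{\hat\varrho} = \gamma_\varrho\kappa x^2 \ge 0$), so it never loses the sign of $\hat\varrho(0)$, which is what keeps the perturbation term $\Delta_b\hat\varrho\,\bar{u}x$ nonpositive.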
The examples discussed above are simple, yet they illustrate the core ideas put forward in the article, no matter whether the time-varying parameters appear in the feedback path or in the input path. The reader will see that the following sections essentially apply the same ideas in more sophisticated ways and to more complicated scenarios.

C. Preliminary Result: State-Feedback Design for Unmatched Parameters
In the examples of Sections II-A and II-B the unknown parameter $\theta(t)$ enters the system dynamics from the same integrator from which the input $u$ enters, that is, the so-called matching condition holds. For a more general class of systems in which the unknown parameters are separated from the input by integrators, the adaptive backstepping design [2] is needed. Consider an $n$-dimensional nonlinear system in the so-called parametric strict-feedback form, namely
$$\dot{x}_i = x_{i+1} + \varphi_i(\bar{x}_i)^\top\theta(t),\quad i = 1,\ldots,n-1$$
$$\dot{x}_n = b(t)u + \varphi_n(x)^\top\theta(t) \qquad (22)$$
where $\bar{x}_i = [x_1,\ldots,x_i]^\top$, $x = \bar{x}_n$, and the regressors $\varphi_i$ are smooth.
One can easily see that if $\varphi_i(0) \neq 0$, then $\varphi_i(0)^\top\theta(t)$ becomes an unknown time-varying disturbance, yielding a disturbance rejection/attenuation problem not discussed here; hence we assume $\varphi_i(0) = 0$. By Hadamard's lemma [30], one can then express the regressors as $\varphi_i(\bar{x}_i) = \Phi_i(\bar{x}_i)\bar{x}_i$ with $\Phi_i$ smooth. We directly give the results below and omit the step-by-step procedures. For each step $i$, $i = 1,\ldots,n$, define the error variables
$$z_i = x_i - \alpha_{i-1},\qquad z_1 = x_1 \qquad (23)$$
the new regressor vectors
$$w_i = \varphi_i - \sum_{j=1}^{i-1}\frac{\partial\alpha_{i-1}}{\partial x_j}\varphi_j \qquad (24)$$
the tuning functions
$$\tau_i = \tau_{i-1} + w_iz_i,\qquad \tau_0 = 0 \qquad (25)$$
and the virtual control laws $\alpha_i$, $i = 1,\ldots,n-1$, and the final stabilizing function $\bar\alpha_n$, taking the standard tuning-function form of [2] with the additional damping terms $-\zeta_i(\bar{x}_i,\hat\theta)z_i$ and $-\kappa(x,\hat\theta)z_n$, respectively, where $c_i > 0$ are constant feedback gains, $\zeta_i(\bar{x}_i,\hat\theta)$ are nonlinear feedback gains to be defined later, $\Gamma_\theta = \Gamma_\theta^\top \succ 0$ is the adaptation gain, and $\kappa(x,\hat\theta)$ is a positive nonlinear feedback gain to be defined later, similar to the one in Section II-B. To proceed with the analysis, select the control law
$$u = \hat\varrho\,\bar\alpha_n \qquad (30)$$
and the parameter update laws
$$\dot{\hat\theta} = \Gamma_\theta\tau_n \qquad (31)$$
$$\dot{\hat\varrho} = -\gamma_\varrho\,\mathrm{sgn}(b)\,\bar\alpha_nz_n \qquad (32)$$
respectively, and consider the Lyapunov function candidate
$$V = \frac{1}{2}\sum_{i=1}^{n}z_i^2 + \frac{1}{2}(\theta_\ell - \hat\theta)^\top\Gamma_\theta^{-1}(\theta_\ell - \hat\theta) + \frac{|b_\ell|}{2\gamma_\varrho}(\varrho_\ell - \hat\varrho)^2. \qquad (33)$$
Remark 8: Recalling Remark 7 and implementing (23) to (29) recursively, it is not hard to see that the virtual control laws vanish at the origin. Note also that the $\hat\theta$-dependent change of coordinates between $z_i$ and $\bar{x}_i$ is smooth, invertible, and such that $\bar{x}_i = 0 \Leftrightarrow \bar{z}_i = 0$; thus we can directly express $w_i$ as $w_i = W_i(\bar{x}_i,\hat\theta)\bar{z}_i$ with $W_i$ smooth and, similarly, $\psi$ as $\psi = \bar\psi(x,\hat\theta)z$ with $\bar\psi$ smooth.
The last two lines of (33) are eliminated by the parameter update laws (31) and (32), and the nonpositivity of $\Delta_b\hat\varrho\,\bar\alpha_nz_n$ can be established in the same way as in Section II-B, thanks to the form of $\bar\alpha_n$. The rest of the problem is to determine the nonlinear damping gains $\zeta_i(\bar{x}_i,\hat\theta)$ and $\kappa(x,\hat\theta)$ to dominate the $\Delta_\theta$-terms.
Proposition 1: Consider system (22) and the control law (30), with the nonlinear damping gains $\zeta_i(\bar{x}_i,\hat\theta)$ and $\kappa(x,\hat\theta)$ chosen, via Young's inequality, to dominate the $\Delta_\theta$-dependent terms, with $c_n > 0$ and $\epsilon_{(\cdot)} > 0$, and the parameter update laws (31) and (32) with $\mathrm{sgn}(\hat\varrho(0)) = \mathrm{sgn}(b)$. Then all closed-loop signals are bounded and $\lim_{t\to+\infty}x(t) = 0$.

Proof: Recalling Remark 8 and invoking Young's inequality yields bounds on the $\Delta_\theta$-dependent terms. Consider now (33) and note that there exists $b_\ell$ such that $\Delta_b\hat\varrho\,\bar\alpha_nz_n \le 0$ provided that $\mathrm{sgn}(\hat\varrho(0)) = \mathrm{sgn}(b)$, which yields $\dot{V} \le -\sum_{i=1}^{n}\bar{c}_iz_i^2$ for some constants $\bar{c}_i > 0$, which guarantees that $z$, $\hat\theta$, and $\hat\varrho$ are globally uniformly bounded. The global uniform boundedness of $x$ is then guaranteed by Remark 8. Note that the exogenous input signals to the dynamics of $z$ are $\theta(t)$ and $b(t)$, which are bounded by Assumption 1 and Assumption 3, and therefore $\dot{z}$ is also bounded. Hence, invoking Barbalat's lemma, one can conclude that $\lim_{t\to+\infty}z(t) = 0$, which further implies that $\lim_{t\to+\infty}x(t) = 0$, by Remark 8.
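A numerical sketch of this state-feedback design for $n = 2$ and a scalar parameter is given below. The system, the gains, and the simplified damping terms (plain nonlinear damping via Young's inequality rather than the exact gains of Proposition 1) are all illustrative assumptions; in particular, $b = 1$ is taken known here to keep the sketch short, although Proposition 1 also covers unknown $b(t)$.

```python
import math

# Sketch of the congelation-of-variables backstepping design for n = 2:
#   x1dot = x2 + theta(t)*x1^2,   x2dot = u + theta(t)*x1*x2,
# with theta(t) unknown and persistently varying; phi(0) = 0 holds.
delta_Dtheta = 0.5                    # assumed known bound on |Delta_theta|
c1, c2, gamma, eps, d2 = 2.0, 2.0, 1.0, 0.5, 1.0
dd = delta_Dtheta**2 / (4.0 * eps)    # step-1 damping coefficient

dt, T = 1e-3, 20.0
x1, x2, theta_hat = 0.5, -0.5, 0.0
for i in range(int(T / dt)):
    t = i * dt
    theta = 1.0 + 0.5 * math.sin(2.0 * t)   # unknown, never constant
    phi1, phi2 = x1**2, x1 * x2             # regressors
    # step 1: z1 = x1, alpha1 = -(c1)z1 - dd*z1^3 - theta_hat*phi1
    z1 = x1
    alpha1 = -c1 * z1 - dd * z1**3 - theta_hat * phi1
    da1_dx1 = -c1 - 3.0 * dd * x1**2 - 2.0 * theta_hat * x1
    da1_dth = -phi1
    z2 = x2 - alpha1
    # step 2: new regressor (24), tuning function (25), control law
    w2 = phi2 - da1_dx1 * phi1
    tau2 = phi1 * z1 + w2 * z2
    u = (-z1 - c2 * z2 - d2 * w2**2 * z2 - theta_hat * w2
         + da1_dx1 * x2 + da1_dth * gamma * tau2)
    theta_hat += dt * gamma * tau2          # tuning-function update law
    x1 += dt * (x2 + theta * phi1)
    x2 += dt * (u + theta * phi2)

print(abs(x1), abs(x2), theta_hat)
```

Since the regressors vanish at the origin, the $\Delta_\theta$-induced perturbation vanishes with the state, and the simulation shows regulation of $x$ to zero with bounded $\hat\theta$, despite the persistent parameter variation.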
Although full-state feedback is often not available in practice, the result presented above indicates how to combine the congelation of variables method with backstepping to cope with unmatched time-varying parameters. We will see that, once proper filters are built, the same techniques can also be applied to systems in which only the output is available for feedback.

III. OUTPUT-FEEDBACK DESIGN
Consider now an $n$-dimensional system in output-feedback form with relative degree $\rho$, described as follows:
$$\dot{x}_i = x_{i+1} + \varphi_{0,i}(y) + \varphi_i(y)^\top a(t),\quad i = 1,\ldots,\rho-1$$
$$\dot{x}_i = x_{i+1} + \varphi_{0,i}(y) + \varphi_i(y)^\top a(t) + b_{n-i}(t)u,\quad i = \rho,\ldots,n$$
$$y = x_1 \qquad (41)$$
with $x_{n+1} \equiv 0$, where $y(t) \in \mathbb{R}$ is the measured output, and $a(t) \in \mathbb{R}^q$ and $b(t) = [b_m(t),\ldots,b_0(t)]^\top$, with $m = n - \rho$, are unknown time-varying parameters.

A. System Reparameterization
Due to the presence of unmeasured state variables we use Kreisselmeier filters (K-filters) [32] to reparameterize the system, with the filter state variables (which are known), into a new form that is favorable for the adaptive backstepping design [2]. The filters are given as follows:
$$\dot\xi = A_k\xi + ky \qquad (44)$$
$$\dot\Xi = A_k\Xi + \Phi(y) \qquad (45)$$
$$\dot\lambda = A_k\lambda + e_nu \qquad (46)$$
where $A_k = S - ke_1^\top$, $k \in \mathbb{R}^n$ is the vector of filter gains, chosen such that $A_k$ is Hurwitz, and $\Phi(y) \in \mathbb{R}^{n\times q}$ is the matrix whose $i$th row is $\varphi_i(y)^\top$. These filters are equivalent (see [2]) to a set of reduced-order filters; in particular, define
$$v_j = A_k^j\lambda,\quad j = 0,\ldots,m,\qquad \Omega^\top = [v_m,\ldots,v_0,\ \Xi]. \qquad (50)$$
Define now the nonimplementable state estimate
$$\hat{x} = \xi + \Omega^\top\theta_\ell,\qquad \theta_\ell = [b_\ell^\top,\ a_\ell^\top]^\top.$$
The state estimation error dynamics are then described as follows:
$$\dot\varepsilon = A_k\varepsilon + \Phi(y)\Delta_a(t) + \bar\Delta_b(t)u \qquad (52)$$
where $\varepsilon = x - \hat{x}$ and $\bar\Delta_b(t) = [0,\ldots,0,\Delta_{b_m}(t),\ldots,\Delta_{b_0}(t)]^\top$. We now show that, after using the K-filters (44)-(46) with the congelation of variables method, the original $n$-dimensional system with time-varying parameters can be reparameterized as a $\rho$-dimensional system with constant parameters $\theta_\ell$ and some auxiliary systems to be defined. The substitution of $\theta_\ell$ for $\theta(t)$ prevents $\dot\theta$ from appearing in the $\varepsilon$-dynamics. For $\rho > 1$ one has the problem described by the $\rho$-dimensional $(y, v_{m,2},\ldots,v_{m,\rho})$-dynamics (53) and, for $\rho = 1$, by the scalar $y$-dynamics (54). Similarly to the classical adaptive backstepping scheme, we consider the $\rho$th-order system (53) [or (54) if $\rho = 1$] to exploit its lower triangular form; yet (53) and (54) are useful only if the estimation error $\varepsilon_2$ converges to 0. In classical schemes this is not a problem, since there are no $\Delta_a(t)$ or $\Delta_b(t)$ terms and $\varepsilon$ converges to 0 exponentially provided that $A_k$ is Hurwitz. The effect of $\Delta_a(t)$ can be dominated via a strengthened damping design, as proposed in [17]. However, the dominance method cannot be directly applied to (52), since $\Delta_b(t)$ is coupled with the input $u$, which causes a chicken-and-egg dilemma if we add additional damping terms to the controller without further modifications. To this end, in the next section we revisit the ideas of [19] and [20] to see how we can decouple $\Delta_b(t)$ and $u$ with the help of the inverse dynamics of system (41).
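The defining property of the K-filters, namely that the estimation error obeys $\dot\varepsilon = A_k\varepsilon$ when the parameters are constant, can be verified on a small example. The sketch below uses a hypothetical $n = 2$ system (not the general system (41)) with constant parameters $a$ and $b$, and checks that the nonimplementable estimate built from the filters converges to the true state.

```python
import math

# Minimal K-filter sketch for an n = 2 output-feedback system with
# constant parameters (illustrative example):
#   x1dot = x2 + a*y,   x2dot = b*u,   y = x1,
# with a = -1, b = 1.5 "unknown". With constant parameters, the
# nonimplementable estimate xhat = xi + Xi*a + lam*b satisfies
# epsdot = A_k*eps, so eps -> 0 for Hurwitz A_k.
k1, k2 = 3.0, 2.0                 # A_k = S - k e_1^T, eigenvalues -1 and -2
a, b = -1.0, 1.5                  # true parameters (used only to form xhat)

def Ak(v):                        # A_k v = [v2 - k1*v1, -k2*v1]
    return [v[1] - k1 * v[0], -k2 * v[0]]

dt, T = 1e-3, 20.0
x = [1.0, -1.0]                   # true (unmeasured) state
xi = [0.0, 0.0]                   # filter (44): xidot  = A_k xi  + k*y
Xi = [0.0, 0.0]                   # filter (45): Xidot  = A_k Xi  + Phi(y), Phi = [y, 0]
lam = [0.0, 0.0]                  # filter (46): lamdot = A_k lam + e_n*u
for i in range(int(T / dt)):
    t = i * dt
    u = math.sin(t)               # any bounded probing input
    y = x[0]
    dx = [x[1] + a * y, b * u]
    Axi, AXi, Alam = Ak(xi), Ak(Xi), Ak(lam)
    dxi = [Axi[0] + k1 * y, Axi[1] + k2 * y]
    dXi = [AXi[0] + y, AXi[1]]
    dlam = [Alam[0], Alam[1] + u]
    for j in range(2):
        x[j] += dt * dx[j]
        xi[j] += dt * dxi[j]
        Xi[j] += dt * dXi[j]
        lam[j] += dt * dlam[j]

xhat = [xi[j] + Xi[j] * a + lam[j] * b for j in range(2)]
eps = [x[j] - xhat[j] for j in range(2)]
print(eps)
```

The filters are driven only by the measured signals $y$ and $u$; the unknown parameters enter only the (analysis-only) combination $\hat{x}$, exactly as in the text above.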

B. Inverse Dynamics
To study the inverse dynamics of (41), pretend that the system is "driven" by $y$, $\varphi_{0,i}(y)$, $\varphi_i(y)$, and their time derivatives. Then one could express the states $x_{\rho+1},\ldots,x_n$, and hence the input path, in terms of these signals and their time derivatives.

[Algorithm 1 (change of coordinates removing the time derivatives of $y$ and $y_i$ from the inverse dynamics) is omitted here.]
Since it is difficult to use backstepping techniques to establish stability, or convergence, properties for the time derivatives of $y$ or $y_i$, we need to perform a change of coordinates to remove the derivative terms from the inverse dynamics. Note that for any pair of smooth signals $s_1(t)$ and $s_2(t)$ the equation
$$s_1\dot{s}_2 = \frac{d}{dt}(s_1s_2) - \dot{s}_1s_2$$
holds. With this fact, one can define a change of coordinates that absorbs each term containing a time derivative into a new state variable, so that the transformed dynamics do not contain time derivatives of $y$ and $y_i$. In the same spirit, applying the change of coordinates specified by Algorithm 1, we are able to remove the terms containing the time derivatives of $y$ and $y_i$ in each equation of the inverse dynamics. The resulting inverse dynamics in the new coordinates (we use $\bar{x}_i$, $i = \rho+1,\ldots,n$, with a slight abuse of notation) are driven only by $y$, $\varphi_{0,i}(y)$, and $\varphi_{i,j}(y)$.
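The effect of this derivative-removing change of coordinates can be seen on a toy example (a hypothetical scalar system, not Algorithm 1 itself): for $\dot{x} = -x + c\dot{y}$, the identity above suggests the new coordinate $\bar{x} = x - cy$, whose dynamics $\dot{\bar{x}} = -(\bar{x} + cy)$ involve only $y$. Both descriptions must reproduce the same $x(t)$.

```python
import math

# Derivative-removal illustration: integrate the same trajectory in the
# original coordinates (which needs ydot) and in the transformed
# coordinates (which needs only y), with y(t) = sin(t) and c = 2.
c = 2.0
dt, T = 1e-4, 10.0
x, xbar = 1.0, 1.0                  # y(0) = 0, so xbar(0) = x(0) - c*y(0) = x(0)
max_diff = 0.0
for i in range(int(T / dt)):
    t = i * dt
    y, ydot = math.sin(t), math.cos(t)
    x += dt * (-x + c * ydot)       # original coordinates: uses ydot
    xbar += dt * (-(xbar + c * y))  # new coordinates: only y appears
    # recover x from xbar and compare at the new time instant
    max_diff = max(max_diff, abs(x - (xbar + c * math.sin(t + dt))))
print(max_diff)
```

Up to the Euler discretization error, $x$ and $\bar{x} + cy$ coincide, which is the mechanism Algorithm 1 exploits equation by equation.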

Remark 10: The time-varying vectors $b_{\bar{x}y}(t)$, $b_{\bar{x}\varphi_{0,i}}(t)$, $b_{\bar{x}\varphi_{i,j}}(t)$ and the time-varying scalars $a_{u_gy^{(j)}}(t)$, $a_{u_gy_i^{(j)}}(t)$ are unknown, as they depend on the unknown $\theta(t)$. However, as a consequence of Assumption 2, they are bounded.
Remark 11: Assumption 4 is verified if $\bar{x} = 0$ is a globally exponentially stable equilibrium of the zero dynamics described by $\dot{\bar{x}} = A_b(t)\bar{x}$; see, e.g., Lemma 4.6 in [33]. Some works (e.g., [11] and [16]) exploit this exponential stability property as a substitute for the classical minimum-phase assumption. Note, finally, that Assumption 4 is not more restrictive than the classical minimum-phase assumption because, for time-invariant systems, Assumption 4 reduces to minimum-phaseness.

C. Filter Design
Consider now the state estimation error dynamics (52) with $u_g$ given by (62), which injects time derivatives of $y$ and $y_i$ into the $\varepsilon$-dynamics via the vector of gains $\bar\Delta_b(t)$. Similarly to what is done in Section III-B, we need to use a change of coordinates to remove the time derivative terms brought by $u_g$. Implementing a change of coordinates in the same spirit of Algorithm 1, the state estimation error dynamics in the new coordinates $\bar\varepsilon$ are free of such derivative terms. Similarly to Remark 10, the time-varying vectors $\bar\Delta_b(t)$, $b_{\bar\varepsilon y}(t)$, $b_{\bar\varepsilon\varphi_{0,i}}(t)$, $b_{\bar\varepsilon\varphi_{i,j}}(t)$ are unknown, yet bounded, due to Assumption 2. We will see that, as long as these parameters are bounded, they do not affect the controller design. In particular, when $b(t)$ is constant these vectors are all identically 0 and $\bar\varepsilon = \varepsilon$, which yields $\dot\varepsilon = A_k\varepsilon + \Phi(y)\Delta_a$, a simplified case that has been dealt with in [17].
Similarly to the description of the ISS inverse dynamics, we want the state estimation error dynamics to be ISS, but in this case, rather than assuming it, we can guarantee such a property by designing the K-filters.
Remark 13: In practice $Q_{\bar\varepsilon}$ is tuned to achieve better filtering performance rather than computed analytically. This is feasible since, for any bounded $\delta_{(\cdot)}$, there exist $\epsilon_{(\cdot)}$ such that $Q_{\bar\varepsilon}$ can be set to an arbitrary positive multiple of $I$, due to (67). Moreover, $\epsilon_{(\cdot)}$ and $\delta_{(\cdot)}$ do not affect the controller design, as the $\sigma_{(\cdot)}$-related terms in (69) are dominated adaptively, as shown in the subsection that follows. In this sense, neither $\epsilon_{(\cdot)}$ nor $\delta_{(\cdot)}$ is implemented or needs to be known.

D. Controller Design
In Sections III-B and III-C we have established the ISS of the inverse dynamics and of the state estimation error dynamics. However, before proceeding to design the controller, we have to consider (53) in the new coordinates. Note that $\varepsilon_2$ can be written as the sum of $\bar\varepsilon_2$ and terms depending on $y$, $\varphi_{0,i}(y)$, and $\dot{y}$ (see (73)). Two special cases, in which either $\rho = 1$, or $\rho \ge 2$ and $b_m$ is constant, and therefore $a_{\varepsilon_2\varphi_{0,i}}(t) = 0$ for all $t \ge 0$, have been discussed in [19]. In general, $a_{\varepsilon_2y^{(1)}}(t) \neq 0$ and, as a result, $\varepsilon_2$ contains $\dot{y}$. Substituting (73) into the first equation of (53) and solving for $\dot{y}$, we can write the dynamics of $y$ in the form (75). Observe that the effect of the $a_{\varepsilon_2y^{(1)}}(t)\dot{y}$ term is to bring the time-varying parameters back into the dynamics of $y$, which requires the congelation of variables method again. To do this, we need first to augment system (53) with the $\xi$-, $\Xi$-, and $\bar{v}$-dynamics, which are not needed in the classical constant-parameter scenarios but are necessary in the current setup. It turns out that the extended system is in the so-called parametric block-strict-feedback form [2], described by (76)-(79). We establish ISS properties for (76), (77), and the zero dynamics of (78) before developing the backstepping design. For the subsystems described by (76) and (77) we have the following result.

Lemma 1: Let the filter gain $k$ be as in Proposition 2. Then system (76) is ISS with respect to the inputs $y$, $\varphi_{0,i}(y)$, and system (77) is ISS with respect to the inputs $\varphi_{i,j}(y)$, where $i = 1,\ldots,n$, $j = 1,\ldots,q$. Moreover, there exist two ISS Lyapunov functions $V_\xi = |\xi|^2_{P_\xi}$ and $V_\Xi = \mathrm{tr}(\Xi P_\Xi\Xi^\top)$, with $P_\xi = P_\Xi = \gamma_{\bar\varepsilon}P_{\bar\varepsilon} \succ 0$, such that the time derivative of $V_\xi$ along the trajectories of (76) satisfies
$$\dot{V}_\xi \le -|\xi|^2 + \sigma_{\xi y}y^2 + \sigma_{\xi\varphi_0}|\varphi_0(y)|^2 \qquad (80)$$
and the time derivative of $V_\Xi$ along the trajectories of (77) satisfies an analogous bound (81) in terms of $\|\Xi\|_F^2$ and $|\varphi_{i,j}(y)|^2$, for some constants $\sigma_{(\cdot)} > 0$.
Proof: Noting (66) and the fact that $P_\xi = \gamma_{\bar\varepsilon}P_{\bar\varepsilon}$, define $\bar{Q}_\xi = \gamma_{\bar\varepsilon}P_{\bar\varepsilon}Q_{\bar\varepsilon}P_{\bar\varepsilon} \succ 0$, take the time derivative of $V_\xi = |\xi|^2_{P_\xi}$ along the trajectories of (76), and invoke Young's inequality to obtain (80). Similarly, take the time derivative of $V_\Xi = \mathrm{tr}(\Xi P_\Xi\Xi^\top)$ along the trajectories of (77), which yields (81), where $P_\Xi = P_\xi$ and $\bar{Q}_\Xi = \bar{Q}_\xi$, and this completes the proof.

The remaining work is to investigate whether ISS holds for the inverse dynamics of (78). To do this, first select $b_\ell$ and then define the associated change of coordinates. The inverse dynamics of (78) are then described by
$$\dot{\bar{v}} = A_{b_\ell}\bar{v} + g_{\bar{v}}(y,\xi,\Xi,\bar\varepsilon_2,t) \qquad (86)$$
where $A_{b_\ell}$ depends on $b_\ell$. Exploiting the flexibility of the congelation of variables method, we can always select $b_\ell$ so as to construct a Hurwitz $A_{b_\ell}$, and therefore the ISS of system (86) can be established, as shown in the lemma that follows.
Having established the ISS properties of (76), (77), and the zero dynamics of (78), we proceed with the backstepping design on the chain of integrators (79). Define the error variables $z_i$, the tuning functions $\tau_i$, and the virtual control laws $\alpha_i$, together with the control law and the parameter update laws, as in (90)-(104). In the definition of $\kappa$, $\bar\varphi_0(y)$ and $\bar\Phi(y)$ are defined such that $\varphi_0(y) = \bar\varphi_0(y)y$ and $\Phi(y) = \bar\Phi(y)y$, which is feasible due to Remark 9. Moreover, the initial values of the parameter estimates are selected such that $\hat\varrho(0) > 0$ and $\hat\zeta_{(\cdot)}(0) > 0$.

Remark 14:
We use dynamically updated "estimates" $\hat\zeta_{(\cdot)}$ as the coefficients of the additional damping terms, due to the convention mentioned in Remark 3, since the required damping coefficients are in general hard to compute (this fact is indicated by the proof of Proposition 3 that follows) and are not vanishing even when all system parameters are constant. Meanwhile, thanks to these adaptive damping terms, we do not need to know $\delta_{\Delta_\theta}$, for a reason similar to the one explained in Remark 13.
Proof: We first analyze the backstepping error variables z i step by step.
Step 1: Consider the dynamics of $z_1$. Considering $V_{z_1} = \frac{1}{2}z_1^2$, taking the time derivative of $V_{z_1}$ along the trajectories of (105), and invoking Young's inequality several times, yields an expression whose residual part consists of the remaining terms to be cancelled by the update law/tuning function design. Moreover, using the same argument as in Sections II-B and II-C, we can show that $-\Delta_{b_m}\hat\varrho\kappa z_1^2 \le 0$, and therefore this term can be dropped hereafter.
Step 2, ..., ρ: Consider the sum of the functions $V_{z_i} = \frac{1}{2}z_i^2$, $i = 1,\ldots,\rho$, and take the time derivative of the sum along the trajectories of the system; the resulting expression consists of negative terms plus the remaining terms to be cancelled by the update laws. (Some technical details, such as cancellations related to the tuning function design, are omitted, as they are well known and not directly related to the modifications for parameter variations; the reader can refer to [2] for these details.) Consider now the Lyapunov function candidate $V$, obtained by scaling the partial Lyapunov function candidates of the $z$-, $\bar{x}$-, $\bar\varepsilon$-, $\xi$-, $\Xi$-, and $\bar{v}$-subsystems by coefficients $\gamma_{V_{(\cdot)}}$, and the required damping coefficients
$$\zeta_y = \sigma_{zy} + \gamma_{V_{\bar{x}}}\sigma_{\bar{x}y} + \gamma_{V_{\bar\varepsilon}}\sigma_{\bar\varepsilon y} + \gamma_{V_\xi}\sigma_{\xi y} + \gamma_{V_{\bar{v}}}\sigma_{\bar{v}y}$$
and, analogously, $\zeta_{\varphi_0}$ and $\zeta_\Phi$, to be compensated by $\hat\zeta_y$, $\hat\zeta_{\varphi_0}$, and $\hat\zeta_\Phi$, respectively. Taking the time derivative of $V$ along the trajectories of the system yields a nonpositive bound. Hence $z$, $\bar{x}$, $\bar\varepsilon$, $\xi$, $\Xi$, $\bar{v}$, $\hat\theta$, $\hat\varrho$, and $\hat\zeta_{(\cdot)}$ are bounded, which completes the proof.

We should not forget that the invariance-like proof of asymptotic output regulation requires the boundedness of $\varepsilon$. In Proposition 3 we have proved the boundedness of $\bar\varepsilon$ after the change of coordinates described by Algorithm 1. However, it is not easy to directly infer the boundedness of $\varepsilon$, since Algorithm 1 involves the time derivatives of $y$, $\varphi_{0,i}(y)$, and $\varphi_{i,j}(y)$, $i = 1,\ldots,n$, $j = 1,\ldots,q$, the boundedness of which is difficult to conclude. Recall that these time derivatives are present because $u$ has to be decomposed at the design stage with the help of the inverse dynamics. Now that we have completed the design, it is more convenient to directly use the boundedness of $u$ for concluding the boundedness of $\varepsilon$, provided that we can first prove the boundedness of $\lambda$, as shown in what follows.
Theorem 1: Consider the system described by (76)-(79) and the adaptive controller described by (90)-(104). Suppose the same assumptions as in Proposition 3 hold. Then all closed-loop signals are bounded and $\lim_{t\to+\infty} y(t) = 0$, that is, asymptotic output regulation to 0 is achieved.
Proof: We can directly conclude the boundedness of $\xi$, $\Xi$, $\hat\theta$, $\hat\varrho$, and $\hat\zeta_{(\cdot)}$ from Proposition 3; therefore, in this proof we only need to establish the boundedness of $\lambda$, $\varepsilon$, and $x$. First recall (50), which yields (115), where "$*$" represents terms that are not important. Note that $v_{0,2} = \lambda_2$ and that $v_{0,2}$ is bounded due to the boundedness of $\bar v$; thus $\lambda_2$ is also bounded. Note that, by Vieta's formulas and since $A_k$ is Hurwitz, $\operatorname{tr}(A_k) < 0$; hence $k_1 > 0$. Consider the dynamics of the first state variable of the input filter (46), that is, $\dot\lambda_1 = -k_1\lambda_1 + \lambda_2$, with a bounded input $\lambda_2$. Thus $\lambda_1$ is also bounded due to the bounded-input bounded-output property.
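The trace argument is the following one-line computation, which uses only the fact that the first coefficient of the characteristic polynomial of $A_k$ is $k_1$:

```latex
% Vieta: the sum of the eigenvalues equals the trace; Hurwitz means
% every eigenvalue has negative real part, so the sum is negative.
\operatorname{tr}(A_k) \;=\; \sum_{i} \lambda_i(A_k) \;=\; -k_1 \;<\; 0
\quad\Longrightarrow\quad k_1 > 0 .
```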
Since $\bar v$, $\lambda_1$, and $z_1$ (or $y$) are bounded, we can conclude that $\lambda_3, \ldots, \lambda_{m+1}$ are bounded by exploiting the lower-triangular structure of the matrix in (115). Since $\lambda_1, \ldots, \lambda_{m+1}$ are bounded, by Proposition 3, $\alpha_1$ is bounded. Note that $z_2$ is bounded; thus $v_{m,2}$ is bounded, which in turn guarantees the boundedness of $\lambda_{m+2}$ due to (115). In the same spirit, the boundedness of $\lambda_{m+3}, \ldots, \lambda_n$ can be established recursively, in a way similar to [2], which proves that $\lambda$ is bounded. The boundedness of $\lambda$ and Proposition 3 yield the boundedness of $\alpha_\rho$ and $v_{m,\rho+1}$, and therefore the boundedness of $u$, which, together with the boundedness of $y$, proves the boundedness of $\varepsilon$ due to (52). Since $x = \xi + \Omega^{\top}\theta + \varepsilon$, we can conclude that $x$ is also bounded. Finally, consider (112) and note that $\dot z$ is also bounded due to the boundedness of the parameters and of all other closed-loop state variables; hence Barbalat's lemma yields $\lim_{t\to+\infty} z(t) = 0$, and in particular $\lim_{t\to+\infty} y(t) = 0$, which completes the proof.
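The bounded-input bounded-output step for the first filter state can be checked numerically. The sketch below (with a hypothetical gain $k_1$ and a bounded surrogate for $\lambda_2$, not the actual closed-loop signals) integrates $\dot\lambda_1 = -k_1\lambda_1 + \lambda_2$ by forward Euler and verifies the standard bound $|\lambda_1(t)| \le |\lambda_1(0)| + \sup|\lambda_2|/k_1$.

```python
import math

def simulate_lambda1(k1=2.0, lam1_0=1.0, dt=1e-3, t_end=20.0):
    """Forward-Euler integration of the stable first-order filter
    dot(lambda1) = -k1*lambda1 + lambda2(t) with a bounded input."""
    lam2 = lambda t: math.sin(3.0 * t) + 0.5 * math.cos(7.0 * t)  # |lambda2| <= 1.5
    lam1, peak, t = lam1_0, abs(lam1_0), 0.0
    while t < t_end:
        lam1 += dt * (-k1 * lam1 + lam2(t))
        peak = max(peak, abs(lam1))
        t += dt
    return peak

peak = simulate_lambda1()
# BIBO bound: |lambda1(t)| <= |lambda1(0)| + sup|lambda2| / k1
assert peak <= 1.0 + 1.5 / 2.0 + 1e-6
```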
Remark 15: Using the fact that $\lim_{t\to+\infty} z(t) = 0$, we can proceed to prove the convergence of $\xi$, $\Xi$, $\lambda$, $\varepsilon$, and $x$ to 0 by exploiting the converging-input converging-output property of the corresponding subsystems, or their dependence on converging signals.
Remark 16: We do not implement any projection operation in the parameter update laws, and therefore the proposed method cannot guarantee boundedness of the parameter estimates in the presence of noise or additive disturbances. The reason for this is that we intend to present a plain scheme that precisely and concisely conveys the spirit of the congelation of variables method, which exploits nonlinear damping and a small-gain-like design to cope with time-varying parameters, instead of relying on robust modifications to the update laws, as done in some of the aforementioned works. This, however, does not mean that the proposed method is incompatible with such robust modifications. In fact, one can replace the classical adaptive backstepping procedure with adaptive backstepping with projection [34] to guarantee boundedness of the parameter estimates. This, however, is not pursued in this article.
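For concreteness, a minimal sketch of the kind of projection operator Remark 16 alludes to is given below. This is the standard smooth ball-projection form used in adaptive designs, not the specific operator of [34]; the radius `M`, the boundary-layer width `eps`, and the update direction `tau` are hypothetical. Inside the ball the update passes through unchanged; in the boundary layer the outward radial component is progressively removed, so the estimate can never leave the set $|\hat\theta| \le M + \epsilon$.

```python
import numpy as np

def proj(theta_hat, tau, M=10.0, eps=0.1):
    """Smooth projection of the update direction tau for the estimate
    theta_hat: unchanged inside |theta| <= M, radial component scaled
    away in the boundary layer M..M+eps (ball constraint, illustrative)."""
    norm2 = float(theta_hat @ theta_hat)
    f = (norm2 - M**2) / (eps**2 + 2.0 * eps * M)  # <= 0 inside the ball
    grad = 2.0 * theta_hat                          # gradient of |theta|^2
    if f <= 0.0 or float(grad @ tau) <= 0.0:
        return tau                                  # interior or inward: keep tau
    # boundary layer and outward update: subtract the scaled radial part
    return tau - min(f, 1.0) * float(grad @ tau) / float(grad @ grad) * grad

# estimate just outside the ball, update pointing outward:
theta = np.array([10.1, 0.0])
tau = np.array([5.0, 1.0])
tau_p = proj(theta, tau)
assert theta @ tau_p < theta @ tau  # outward (radial) growth is reduced
```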

IV. SIMULATIONS
To compare the proposed controller with the classical adaptive controller, consider the nonlinear system described as follows, where the time-varying parameters are defined by $b_1(t) = 1 + 0.2\sin(5t)$ and $b_0(t) = 6 + \sin(20t)$. For the parameters solely used in Controller 2, set the adaptation gains $\gamma_{(\cdot)}$ and the remaining design gains to 1, $\delta_{\Delta_{b_m}} = 0.2$, and $\delta_{\Delta_\theta} = 1$ (note that one does not need to know $\delta_{\Delta_\theta}$, as mentioned in Remark 14), and set the initial conditions to $\hat\zeta_y(0) = 2$ and $\hat\zeta_\Phi(0) = 1$ (nonzero initial conditions provide additional damping from the beginning to counteract the parameter variations). The initial condition for the system state is set to $x(0) = [1, 0, 0]^{\top}$. Two scenarios are explored. In the first scenario, each controller is applied to a separate yet identical system, while the state-dependent time-varying parameters of both systems are generated by the closed-loop system controlled by Controller 1; the second scenario has the same setting, except that the state-dependent time-varying parameters are generated by the closed-loop system controlled by Controller 2. In both scenarios, the "baseline" results are the responses of the closed-loop system with constant nominal parameters controlled by Controller 1, which demonstrate the performance of the classical controller in the case of constant parameters. The responses of the system state variables in the two scenarios are plotted in Figs. 3 and 4, respectively, and the parameters used in each scenario are shown in Figs. 5 and 6, respectively. These results show that the proposed controller (Controller 2) outperforms the classical controller (Controller 1) in the presence of time-varying parameters and effectively prevents the oscillations caused by parameter variations.
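The qualitative gap between the two controllers can be reproduced on a toy problem. The scalar sketch below is not the third-order system above: the plant $\dot x = \theta(t)x + u$, the reference $r(t) = \sin t$, the variation $\theta(t) = 1 + 0.5\sin(5t)$, and all gains are hypothetical. It compares a classical certainty-equivalence adaptive law against the same law augmented with an adaptively grown damping term, in the spirit of the congelation of variables design.

```python
import math

def run(adaptive_damping, dt=1e-3, t_end=20.0):
    """Track r(t)=sin(t) for dx/dt = theta(t)*x + u with unknown
    time-varying theta(t) = 1 + 0.5*sin(5t), by forward Euler.
    Classical law: u = dr - theta_hat*x - c*e,  theta_hat' = gamma*e*x.
    Augmented law adds -zeta_hat*e with zeta_hat' = gamma_z*e^2."""
    c, gamma, gamma_z = 2.0, 5.0, 10.0
    x, th_hat, z_hat = 1.0, 0.0, 0.0
    errs, t = [], 0.0
    while t < t_end:
        theta = 1.0 + 0.5 * math.sin(5.0 * t)
        r, dr = math.sin(t), math.cos(t)
        e = x - r
        u = dr - th_hat * x - c * e
        if adaptive_damping:
            u -= z_hat * e                 # extra damping, congelation-style
            z_hat += dt * gamma_z * e * e  # damping gain grows with the error
        th_hat += dt * gamma * e * x       # classical gradient update
        x += dt * (theta * x + u)
        if t > 15.0:                       # steady-state window for RMS error
            errs.append(e * e)
        t += dt
    return math.sqrt(sum(errs) / len(errs))

rms1, rms2 = run(False), run(True)
assert rms2 < rms1  # the damping-augmented law leaves a smaller residual
```

The comparison mirrors the article's observation: the classical law alone leaves a persistent residual driven by the parameter variation, while the adaptively tuned damping suppresses it.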

V. CONCLUSION
This article discusses a new adaptive control scheme, based on the so-called congelation of variables method, to cope with time-varying parameters. Several full-state feedback examples, including scalar systems with time-varying parameters in the feedback path and in the input path, as well as $n$-dimensional systems with unmatched time-varying parameters, are considered to illustrate the preliminary results. The output regulation problem for a more general class of nonlinear systems, to which the previous results are not directly applicable due to the coupling between the input and the time-varying perturbation, is then discussed. To solve this problem, the ISS of the inverse dynamics, a counterpart of minimum-phaseness in classical adaptive control schemes, is exploited to convert the coupling between the input and the time-varying perturbation into a coupling between the output and the time-varying perturbation. A set of K-filters that guarantee ISS state estimation error dynamics is also designed to replace the unmeasured state variables. Finally, a controller with adaptively updated damping terms is designed, via a small-gain-like analysis, to guarantee convergence of the output to 0 and boundedness of all closed-loop signals. The simulation results show the performance improvement resulting from the use of the proposed controller compared with the classical adaptive controller in the presence of time-varying parameters, even if the time-varying parameters are both unknown and persistently varying.
In future work, knowledge of an internal model of the time-varying parameters can be exploited to avoid an overconservative controller design.