The ESS for evolutionary matrix games under time constraints and its relationship with the asymptotically stable rest point of the replicator dynamics

Recently we interpreted the notion of ESS for matrix games under time constraints and investigated the corresponding state in the polymorphic situation. Now we give two further static (monomorphic) characterizations which are the appropriate analogues of those known for classical evolutionary matrix games. Namely, it is verified that an ESS can be described as a neighbourhood invader strategy independently of the dimension of the strategy space in our non-linear situation too, that is, a strategy is an ESS if and only if it is able to invade and completely replace any monomorphic population which totally consists of individuals following a strategy close to the ESS. With the neighbourhood invader property at hand, we establish a dynamic characterization under the replicator dynamics in two dimensions which corresponds to the strong stability concept for classical evolutionary matrix games. Besides, in some special cases, we also prove the stability of the corresponding rest point in higher dimensions.


Introduction
In ecology, the number of individuals ready to interact with the conspecifics they encounter is less than the total number of individuals in the population. This is natural since other activities, such as handling the prey (Holling 1959; Garay and Móri 2010) or recovering from an injury (Garay et al. 2015; Sirot 2000), decrease the number of individuals able to interact in the consumer/predator population. Moreover, the time necessary for these activities can depend on the strategy of the given individual, resulting in evolutionary outcomes (Křivan and Cressman 2017, Sections 3.1.1-2 and 3.2.1-2; Garay et al. 2017, Section 4; Garay et al. 2018, Example 1) different from those in the classical setup of Maynard Smith (1982), in which the effect of time constraints can be considered a constant and hence a negligible factor in the payoff function [see the remark after (2.2)]. Also, other theories such as optimal foraging theory (Charnov 1976; Garay et al. 2012) or ecological games (Broom and Ruxton 1998; Broom et al. 2008, 2009, 2010; Broom and Rychtář 2013; Garay et al. 2015) consider the substantial effect of time constraints on the expected evolutionary outcomes. (Corresponding author: Tamás Varga, vargata@math.u-szeged.hu. Extended author information is available on the last page of the article.)
We recently developed a one species matrix game under time constraints and interpreted the monomorphic evolutionarily stable strategy (ESS) (Garay et al. 2017). This static approach is essentially used in this article too. An ESS is a strategy with a fitness greater than that of any other (mutant) strategy appearing in a sufficiently small amount in the population of individuals following the ESS (Maynard Smith and Price 1973; Maynard Smith 1974, 1982; Taylor and Jonker 1978; Bomze and Weibull 1995; Balkenborg and Schlag 2001; Garay et al. 2017). Intuitively, it describes a strategy which is able to resist the invasion of mutants. Therefore, if the members of a population follow an ESS, then the population is stable in the sense that it always returns to its original state after small perturbations.
The other solution concept of evolutionary game theory corresponds to an equilibrium point of an evolutionary dynamics which models how the frequencies of the different types in the population vary in time (Taylor and Jonker 1978; Hofbauer et al. 1979; Hines 1980; Zeeman 1981; Akin 1982; Hofbauer and Sigmund 1988, 1998; Cressman 1992). In the case of the standard replicator dynamics (Taylor and Jonker 1978), for example, the evolutionarily stable states are related to the asymptotically stable rest points of the dynamics.
It is known that the two solution concepts are connected with each other by the folk theorem of evolutionary game theory, which says that an ESS corresponds to an asymptotically stable rest point of the replicator dynamics (Hofbauer et al. 1979; Zeeman 1980; Hofbauer and Sigmund 1998; Cressman 2003; Broom and Rychtář 2013). Moreover, this statement can be reversed in some sense (Cressman 1990, 1992; Hofbauer and Sigmund 1998): if a strategy is strongly stable, then it is an ESS. A strongly stable strategy is a mix (convex combination) of the strategies existing in the population to which the mean strategy of the population tends under some dynamics describing the evolution of the population.
In this article we continue the investigation begun in Garay et al. (2018) and seek an answer to what the relationship is between an ESS and the corresponding rest point of the replicator dynamics in matrix games under time constraints.¹ More precisely, we use the notion of uniformly evolutionarily stable strategy (UESS) instead of ESS. Although the equivalence of ESS and UESS is an open question in our case, Bomze and Weibull draw attention to the fact that, in a more general context, the evolutionary stability of a strategy does not necessarily imply the asymptotic stability of the corresponding state. To avoid this problem they propose the use of UESS (Bomze and Weibull 1995), and we follow them. This, however, does not mean any essential restriction in the sense that the two notions are equivalent for classical matrix games (when there are no time constraints).

¹ In Křivan and Cressman (2017) and Garay et al. (2017) there are examples for the Hawk-Dove game and the Prisoner's dilemma with distinct behaviors compared to the classical case. For example, if the cost of fighting is smaller than the value of the resource, then Hawk is the only equilibrium in the classical case. Contrary to this, if the Hawk-Hawk interaction is long enough compared to the other types of interactions, then a mixed equilibrium also appears in addition to the pure Hawk equilibrium. Also, for instance, if the matrix of the classical Prisoner's dilemma is taken as the time constraint matrix, then the cooperator strategy proves to be an ESS for an appropriate payoff matrix. In Garay et al. (2018), there is an example for a mixed ESS such that the corresponding interior state is a locally asymptotically stable rest point of the replicator dynamics but, contrary to the classical case, it is not globally stable.
First we give two further static characterizations of UESS (Theorems 3.1 and 3.2) showing that a UESS is a neighbourhood invader strategy (Apaloo 1997, 2006; Apaloo et al. 2009) and vice versa. This is very important because this is not necessarily true for non-linear payoff functions in general (Apaloo 1997, p. 75). For classical matrix games the corresponding result makes it easier to verify the asymptotic stability of the corresponding rest point of the replicator dynamics. Using these characterizations we extend the main result of Garay et al. (2018), showing that, for two dimensional strategies, a strategy is a UESS if and only if the set of the corresponding states is asymptotically stable with respect to the replicator dynamics, independently of how many two dimensional phenotypes are considered in the replicator dynamics. The proof, however, makes essential use of the fact that the strategy space is a one dimensional manifold. In higher dimensions, our proof does not work in general, though we can apply the characterizations in particular cases.

Time constraints to a matrix game played by a population
Consider a population in which every individual follows a strategy. A strategy is an element of the $(N-1)$-dimensional simplex
$$S_N := \left\{ q = (q_1, q_2, \dots, q_N) \in \mathbb{R}^N : \sum_{i=1}^N q_i = 1 \text{ and } q_i \ge 0,\ i = 1, 2, \dots, N \right\}$$
for some positive integer $N$. Sometimes we use the word "(pheno)type" instead of "strategy" and we say that an individual is a $q$ (type) individual or a $q$ strategist if it follows strategy $q$. Denote by $e_1, \dots, e_N$ in turn the vertices of $S_N$, that is, the vectors $(1, 0, \dots, 0)$, $(0, 1, 0, \dots, 0)$, ..., $(0, \dots, 0, 1)$. Then a strategy $q = (q_1, \dots, q_N) = \sum_i q_i e_i$ can be considered as the strategy which applies $e_i$ with probability $q_i$ in a game. The strategies $e_1, \dots, e_N$ are called pure strategies while the other strategies in $S_N$ are called mixed strategies.
Consider a population of phenotypes $p_1, \dots, p_n$ and let $x_i$ denote the proportion of individuals following strategy $p_i$ ($x_i \ge 0$, $i = 1, 2, \dots, n$, $\sum_i x_i = 1$). It is assumed that, at a given moment, $n$ is a finite positive integer unrelated to $N$. Also, our model here is a frequency dependent model (as in most works in the References): the fitness of a given phenotype is assumed to depend only on the frequencies of the different phenotypes in the population and not on the population size.
Every individual can be active or inactive. An active individual looks for an opponent. If it meets another active individual then they start to play but if it meets an inactive individual then there is no game and the active individual keeps on searching for an opponent. After starting a game, both participants become inactive (from the point of view of all other members of the population). That is, active means that the individual is ready to play while inactive means that the individual is not able to play.
If an individual plays the $i$th pure strategy and its opponent the $j$th pure strategy, then its intake is $a_{ij}$. The values $a_{ij}$ determine an $N \times N$ matrix $A$, the intake matrix, whose $ij$ entry is $a_{ij}$. Accordingly, if one of the players follows strategy $p$ and its opponent follows strategy $q$, then the expected intake of the $p$ individual is calculated as $pAq$.
We assume that the players need to wait some time, depending on their strategies, after every game. This waiting time can include regeneration, digestion, processing of the gained resource and so on, all activities which can be necessary for the player to get ready for a new game. The interaction itself needs no time; it is assumed to be instantaneous. If a player uses the $i$th pure strategy and its opponent the $j$th pure strategy, then the first player has to wait a time of expected length $\tau_{ij}$ and its opponent a time of expected length $\tau_{ji}$. It is assumed that the waiting times are independent (!) exponential random variables with expected values $\tau_{ij}$ and $\tau_{ji}$, respectively. The instantaneous interaction and the independence of the waiting times crucially distinguish our model from that of Křivan and Cressman (2017), in which the interaction itself takes time, this time is the same for the two opponents, and there are no waiting times after the interaction.
The $N \times N$ non-negative matrix $T$ determined by the expected values $\tau_{ij}$ is called the time constraint matrix. Consequently, the expected waiting time of a $p$ individual after a play against a $q$ individual is calculated as $pTq$.
As mentioned, if the other individual is inactive then there is no interaction and the waiting time after an encounter of this kind is considered to be 0.
The duration which passes from the end of a wait until finding an (active or inactive) individual is also considered to be an exponential random variable, and its expected length is taken as the unit of time for our model, so its expected length is 1. This explains why it is important to consider a waiting time of length 0 if the searching individual finds an inactive "opponent", for the searching time with expected length 1 always restarts after waiting, even if the expected duration of the waiting is 0. Consequently, the larger the proportion of inactive individuals in the population, the longer it takes to find an active opponent.
In summary, the life of an individual essentially consists of the alternation of searching and waiting (see Fig. 1). Between two consecutive waits the individual expectedly spends 1 unit of time searching and after finding an individual of phenotype q the p individual expectedly spends time pT q waiting if the opponent is active and time 0 waiting if the opponent is inactive.
Following Garay et al. (2017) we interpret the fitness of an individual as an intake rate, calculated as the expected intake per expected unit of time. In this article we only give a heuristic argument for the calculation of the fitness. For more details we refer the interested reader to Garay et al. (2017), in which a Markov model corresponding to the model described above has been built, mathematically validating the way we calculate the fitness.

Fig. 1 The alternation of active and inactive states in the life of an individual. A node corresponds to an encounter. A loop corresponds to a wait after encountering an active opponent. A node without a loop corresponds to encountering an inactive opponent. A segment between two adjacent nodes corresponds to a search. A search, the node at the right end of the segment representing the search, and the loop sitting on the node (even if the length of the loop is 0) correspond to an activity cycle. The expected times of the waits and the expected intakes in the encounters are also shown.
Let us calculate the fitness of an individual of strategy $p$ in our population of $p_1, \dots, p_n$ individuals. Consider a huge population which is well mixed and in a stationary state. Being in a stationary state means that the proportions of the active and the inactive individuals, respectively, in the population do not vary in time unless the composition of the population varies. Their proportions only depend on the composition of the population. That is, the proportion of active individuals corresponds to the proportion of the expected time an individual spends in the active state during its life. This is the reason why we define the proportions of active individuals of strategies $p_1, \dots, p_n$ as the unique solution in $[0,1]^n$ of the equation system (see Garay et al. 2017, Lemma 2)
$$\varrho_i = \frac{1}{1 + p_i T \sum_{j=1}^{n} x_j \varrho_j p_j}, \qquad i = 1, \dots, n, \tag{2.1}$$
and denote it by $\varrho_i = \varrho_i(x) = \varrho_{p_i}(x, p_1, \dots, p_n)$. Analogously to this, we can define $\varrho_p$ for an arbitrary strategy $p$ distinct from $p_1, \dots, p_n$ as follows:
$$\varrho_p = \frac{1}{1 + p T \sum_{j=1}^{n} x_j \varrho_j p_j}. \tag{2.2}$$
This essentially gives the proportion of time spent in the active state by a $p$ individual in a population in which the frequency of $p$ individuals is 0. Call a part of a life from the beginning of a search until the end of the wait after the search (even if the wait takes no time) an activity cycle. Then the numerator in the previous formula is just the expected length of the active state in an activity cycle, while the denominator is just the expected duration of an activity cycle. Indeed, every activity cycle includes an active period (the search) which has 1 unit of time length. In addition to this time, in the $x_j \varrho_j$ proportion of activity cycles comes the (expected) time $p_i T p_j$ necessary for the wait after an interaction with an active $p_j$ individual, while in the $x_j(1 - \varrho_j)$ proportion of activity cycles comes no further time because the found $p_j$ individual is inactive. Altogether, the expected length of an activity cycle is $1 + p_i T \sum_j x_j \varrho_j p_j$. Similarly, the expected intake of a $p$ individual during an activity cycle is $pA \sum_{i=1}^{n} x_i \varrho_i p_i$.
Since we are interested in the long term success of an individual, as mentioned above, we measure the fitness as the expected intake in an activity cycle per the expected duration of an activity cycle, formally by the quotient
$$\frac{pA \sum_{i=1}^{n} x_i \varrho_i p_i}{1 + pT \sum_{j=1}^{n} x_j \varrho_j p_j}. \tag{2.3}$$
We denote this amount by $W_p(x) = W_p(x, p_1, \dots, p_n)$. We remark that if all entries of $T$ are the same constant, say $\tau$, then $1 + pT \sum_{j=1}^{n} x_j \varrho_j p_j = 1 + \tau \sum_{j,k} x_j \varrho_j p_j^k$ for any $p \in S_N$, where $p_j^k$ is the $k$th coordinate of $p_j$. Consequently, $\varrho_i = \varrho_l =: \varrho$ for any $i, l \in \{1, 2, \dots, n\}$. Therefore, $W_p > W_q$ iff $pA \sum_i x_i p_i > qA \sum_i x_i p_i$, that is, the relationship between the fitnesses of different strategies only depends on the intake. This shows that our model includes the classical matrix games (when there are no time constraints) introduced by Maynard Smith and Price (1973) and Maynard Smith (1974, 1982).
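As a numerical sketch (our own illustration, not part of the model's formal development; the function name is an assumption of ours), the stationary system (2.1) can be solved by fixed-point iteration and the fitness quotient (2.3) evaluated directly:

```python
import numpy as np

def fitnesses(P, A, T, x, iters=200):
    """Fitness of each phenotype: expected intake per expected duration
    of an activity cycle, following (2.1) and (2.3).

    P : (n, N) array whose rows are the phenotypes p_1, ..., p_n
    A : (N, N) intake matrix,  T : (N, N) time constraint matrix
    x : (n,) frequency distribution of the phenotypes
    """
    rho = np.ones(len(P))                # start from "everyone active"
    for _ in range(iters):               # fixed-point iteration for (2.1)
        mean = (x * rho) @ P             # sum_j x_j rho_j p_j
        rho = 1.0 / (1.0 + P @ T @ mean)
    mean = (x * rho) @ P
    return (P @ A @ mean) / (1.0 + P @ T @ mean)
```

For a constant time constraint matrix $T = \tau \mathbf{1}$ the denominators coincide for every phenotype, so the fitness ranking reduces to the ranking of the intakes, in line with the remark above.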
From an evolutionary perspective, we are interested in the "uninvadable" strategies. We use two approaches: monomorphic and polymorphic ones as follows.

Monomorphic approach
In this approach, we fix a strategy $p$ and investigate what occurs if mutants, all of the same strategy $q \ne p$, appear in a population consisting of $p$ type individuals, where $q$ runs over the strategy space $S_N$; that is, there can be at most one type of mutant in the population at any moment. One can think of strategy $p$ as the strategy of the resident individuals. However, it is important to emphasize that a situation with more than one type of mutant in the population is no longer considered monomorphic in this article, even if the proportion of all mutants is so small that the population could be considered monomorphic from a practical point of view.
Assume that the proportion of the mutant individuals is $\varepsilon$. Denote by $\rho_p = \rho_p(p, q, \varepsilon)$ and $\rho_q = \rho_q(p, q, \varepsilon)$ the proportion of active individuals in the subpopulation of $p$ type and $q$ type individuals, respectively. According to (2.1), they are calculated as the unique solution pair in $[0,1]$ of the equation system
$$\rho_p = \frac{1}{1 + pT[(1-\varepsilon)\rho_p p + \varepsilon \rho_q q]}$$
and
$$\rho_q = \frac{1}{1 + qT[(1-\varepsilon)\rho_p p + \varepsilon \rho_q q]}.$$
The fitness of a $p$ type and a $q$ type individual is denoted by $\omega_p = \omega_p(p, q, \varepsilon)$ and $\omega_q = \omega_q(p, q, \varepsilon)$, respectively. For the limit case $\varepsilon = 0$, that is, if the phenotype of every individual is $p$, we use the notation $\rho(p)$ and $\omega(p)$, respectively. It is clear that $\rho(p) = \rho_p(p, p, \varepsilon) = \rho_p(p, q, 0)$ and $\omega(p) = \omega_p(p, p, \varepsilon) = \omega_p(p, q, 0)$ for every $\varepsilon \in [0, 1]$ and $q \in S_N$. Furthermore, $\rho(p)$ is the unique solution in $[0, 1]$ of the equation $\rho = 1/(1 + pT\rho p)$, that is, $\rho(p) = (\sqrt{4pTp + 1} - 1)/(2pTp)$. We define "uninvadability" mimicking Taylor and Jonker (1978):

Definition 2.1 A strategy $p \in S_N$ is called a uniformly evolutionarily stable strategy of the matrix game under time constraints (UESS for short) if there is an $\varepsilon_0 > 0$ (independent of $q$) such that the inequality
$$\omega_p(p, q, \varepsilon) > \omega_q(p, q, \varepsilon) \tag{2.4}$$
holds for all possible mutant strategies $q \ne p$ whenever $0 < \varepsilon \le \varepsilon_0$.
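The closed form for $\rho(p)$ can be checked against its defining equation numerically; a small sketch (the helper name is ours):

```python
import numpy as np

def rho_monomorphic(p, T):
    """rho(p) = (sqrt(4 pTp + 1) - 1) / (2 pTp): the unique root in [0, 1]
    of rho = 1 / (1 + (pTp) * rho), i.e. the stationary active proportion
    in a monomorphic p-population."""
    c = p @ T @ p                        # expected waiting time pTp
    if c == 0:                           # no time constraints: always active
        return 1.0
    return (np.sqrt(4.0 * c + 1.0) - 1.0) / (2.0 * c)
```

For instance, $pTp = 2$ gives $\rho(p) = (\sqrt{9} - 1)/4 = 1/2$, and indeed $1/(1 + 2 \cdot \tfrac12) = \tfrac12$.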
Let $\bar\omega := (1-\varepsilon)\omega_p + \varepsilon\omega_q$. It is clear that (2.4) is equivalent to either of the following inequalities:
$$\omega_p(p, q, \varepsilon) > \bar\omega(p, q, \varepsilon) \quad \text{or} \quad \bar\omega(p, q, \varepsilon) > \omega_q(p, q, \varepsilon). \tag{2.4'}$$
The adverb "uniformly" refers to the fact that $\varepsilon_0$ is chosen independently of $q$. So its role is analogous to that in "uniformly continuous" or "uniformly convergent". We remark that Bomze applies the "uninvadable strategy" terminology for a UESS (Bomze and Pötscher 1989, 1993; Bomze and Weibull 1995). Also, we mention that the above wording shows that a UESS is a singleton pointwise or uniform evolutionarily stable set according to the definition of Balkenborg and Schlag (2001).
If $\varepsilon_0$ is allowed to be chosen depending on $q$, we arrive at the definition of an evolutionarily stable strategy of the matrix game under time constraints, which was used in our previous work (Garay et al. 2018), and we get back Taylor and Jonker's wording. It is not too difficult to see that for $N = 2$ the two notions are equivalent, whereas in higher dimensions the equivalence is not known in our setup. It would be no surprise if the equivalence failed since, although it holds for classical matrix games, it does not necessarily hold for a more general payoff function which is non-linear in at least one of its variables (Vickers and Cannings 1987; Bomze and Pötscher 1993; Bomze and Weibull 1995, Section 3).
As in classical matrix games, every ESS satisfies a weaker stability condition. If we let ε tend to zero we can immediately see that every ESS is a Nash equilibrium.

Definition 2.2 Strategy $p$ is a Nash equilibrium of the matrix game under time constraints (NE for brevity) if for all $q \in S_N$
$$\omega_q(p, q, 0) \le \omega_p(p, q, 0) = \omega(p).$$
If a strict inequality holds in the previous inequality for every q = p then p is said to be a strict Nash equilibrium.
Consider a totally mixed NE $p = \sum_i p_i e_i$ (i.e. $p_i > 0$ for every $i = 1, \dots, N$) for a matrix game under time constraints. Recall the following fact for matrix games: a totally mixed strategy $p$ is a Nash equilibrium for the matrix game if and only if $pAp = e_i A p$ ($1 \le i \le N$) (Hofbauer and Sigmund 1998, p. 63). Does a similar statement hold for matrix games under time constraints?
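The classical criterion quoted above is easy to test numerically; a sketch (the function name is ours) that Lemma 2.3 below generalizes to the time-constrained setting:

```python
import numpy as np

def is_totally_mixed_ne_classical(p, A, tol=1e-9):
    """Hofbauer-Sigmund criterion for a *classical* matrix game: a totally
    mixed p is a Nash equilibrium iff e_i A p is the same for every i."""
    payoffs = A @ p                      # the vector of payoffs e_i A p
    return bool(np.all(p > 0)) and float(np.ptp(payoffs)) < tol
```

For example, the barycentre of rock-paper-scissors passes the test, while any other totally mixed strategy of that game fails it.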
Let supp(p) be the set {i | p i > 0}, that is, supp(p) denotes the set of indices i for which p i is not zero.

Lemma 2.3 (Lemma on neutrality)
For strategy p, the following three conditions are equivalent: The proof of the lemma can be found in "Appendix A.1".

Remark 2.4
From the previous lemma it immediately follows that a strict NE is always a pure strategy (Garay et al. 2018, Theorem 4.1). Another consequence is that a totally mixed NE $p$ can be calculated as the solution of the equation system given by the neutrality conditions of Lemma 2.3 together with $\sum_i p_i = 1$.

Remark 2.5 From (2.4), by continuity, it is easy to see that if $p$ is a strict NE, then $p$ is an ESS. Moreover, $p$ is a UESS, as we see later.

Polymorphic approach
In this approach, there are finitely many, say $n$, fixed phenotypes which can be present in the population with positive frequencies. Besides these, no other phenotype can appear in the population. Let $p_1, p_2, \dots, p_n \in S_N$ be the admissible phenotypes. The frequency distribution $x = (x_1, \dots, x_n) \in S_n$ of the admissible phenotypes is called the state of the population. We investigate how the state of the population varies with time. Sometimes there is a distinguished type which can be considered as the resident phenotype. We use the notation $p_*$ for the distinguished type. Then the possible number of phenotypes in the population is $n + 1$ and the phenotypes $p_1, \dots, p_n$ can be considered as the mutant phenotypes. Generally, it is clear from the context whether we use a distinguished type or not, therefore we do not emphasize this fact separately.
We denote by $\varrho_i = \varrho_i(x)$ ($\varrho_* = \varrho_*(x)$) the proportion of active individuals among the $p_i$ ($p_*$) type individuals, and we calculate $\varrho_i$ and $\varrho_*$ as in (2.1). We denote by $\bar\varrho$ and $\bar\varrho'$ the proportion of active individuals in the whole population and in the subpopulation of mutant phenotypes, respectively. Furthermore, $W_i(x)$ ($W_*(x)$) denotes the fitness of a $p_i$ ($p_*$) type individual, defined as in (2.3); that is, for instance,
$$W_i(x) = \frac{p_i A \sum_{j} x_j \varrho_j p_j}{1 + p_i T \sum_{j} x_j \varrho_j p_j}.$$
We also need the mean fitness of the population and the mean fitness of the mutant subpopulation, for which we use the notations $\bar W$ and $\bar W'$, respectively; that is, $\bar W(x) = \sum_i x_i W_i(x)$, and $\bar W'$ is the analogous average taken over the mutant subpopulation only. There is an important relationship between the polymorphic and the monomorphic approach. We introduce the mean strategies $\bar h(x)$ of the population and $\bar h'(x)$ of the mutant subpopulation.

Proposition 2.6 The mean strategies $\bar h(x)$ and $\bar h'(x)$ give the stationary distribution for the monomorphic model (in particular, $\rho(\bar h(x)) = \bar\varrho(x)$).

Remark 2.7
It is easy to see that if there is no distinguished type, the previous assertion says that the polymorphic population corresponds to a monomorphic population consisting of only $\bar h(x)$ type individuals in the sense that $\rho(\bar h(x)) = \bar\varrho(x)$ and $\omega(\bar h(x)) = \bar W(x)$. Proposition 2.6 and the previous remark give the monomorphic population which corresponds to a polymorphic population. Observe that the relationship is not as trivial as for classical matrix games, where $\bar h(x)$ is simply equal to $\sum_i x_i p_i$. It is also clear for classical matrix games that, given a monomorphic population in which every individual follows the same strategy $p_* = \sum_i \alpha_i p_i$, the polymorphic population consisting of $p_1, \dots, p_n$ individuals with frequencies $\alpha_1, \dots, \alpha_n$ corresponds to the monomorphic population of $p_*$ phenotypes. In general, the following holds.
Proposition 2.8 (Garay et al. 2018, pp. 9-10) Let $p_*$ be the convex combination of the strategies $p_1, \dots, p_n$ with coefficients $\alpha_1, \dots, \alpha_n$. Define $\rho(p_*)$ as the unique solution of the equation $\rho = 1/(1 + p_* T \rho p_*)$.

Finally, we recall the replicator dynamics
$$\dot x_i = x_i \left[ W_i(x) - \bar W(x) \right], \qquad i = 1, \dots, n,$$
with respect to the phenotypes $p_1, \dots, p_n$, which describes a dynamical model of the polymorphic population. We remark that when using the replicator dynamics we follow the usual practice of the coevolutionary literature. Namely, it is presumed that the time scale of evolution is much slower than that of the individual interactions (cf. Ginzburg 1983; Roughgarden 1983; Křivan and Cressman 2017). Therefore, it can be assumed that the population is in a stationary state on the time scale of evolutionary effects, so we can use the fitness defined in (2.3) in the replicator dynamics.
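To make the dynamics concrete numerically, one Euler step can be sketched as follows (an illustration under our own naming; a simple discretization, not the paper's analytical tool):

```python
import numpy as np

def replicator_step(x, P, A, T, dt=0.01, iters=200):
    """One Euler step of the replicator dynamics
    x_i' = x_i (W_i(x) - Wbar(x)) with time-constrained fitnesses."""
    rho = np.ones(len(P))
    for _ in range(iters):                     # stationary active proportions (2.1)
        mean = (x * rho) @ P
        rho = 1.0 / (1.0 + P @ T @ mean)
    mean = (x * rho) @ P
    W = (P @ A @ mean) / (1.0 + P @ T @ mean)  # fitness of each phenotype
    x_new = x + dt * x * (W - x @ W)           # W - Wbar drives the change
    return x_new / x_new.sum()                 # renormalize against drift
```

With $T = 0$ this reduces to an Euler step of the classical replicator dynamics, so frequencies of above-average intake grow.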
It is known for classical matrix games that if the possible phenotypes are just the pure strategies $e_1, \dots, e_N$ (that is, $n = N$ and $p_i = e_i$) and $p$ is an ESS, then the corresponding state is an asymptotically stable rest point of the replicator dynamics (Hofbauer et al. 1979; Theorem 1 in Zeeman 1980; Theorem 7.2.4 in Hofbauer and Sigmund 1998). We investigate what the situation is under time constraints. Unfortunately, the general case remains open. However, for two dimensions, we give a complete characterization of the asymptotic stability with the help of the notion of ESS, extending Theorem 4.2 in Garay et al. (2018).

Main results
In the first two theorems we give two monomorphic characterizations of the notion of UESS. They are extensions of Hofbauer and Sigmund (1998, Theorem 6.4.1), which states that a strategy $p$ is an ESS with respect to the matrix game with matrix $A$ if and only if there is a $\delta > 0$ such that $pAq > qAq$ whenever $0 < \|p - q\| < \delta$. It is important to note that $\delta$ is independent of $q$ which, taking the next theorems into account, shows that UESS and ESS are equivalent in the case of classical matrix games. The proofs can be found in "Appendix A.2".

Theorem 3.1 A strategy $p$ is a UESS if and only if there exists a $\delta > 0$ such that for any $q$ with $0 < \|q - p\| < \delta$ and for any $\varepsilon \in (0, 1]$ we have
$$\omega_p(p, q, \varepsilon) > \bar\omega(p, q, \varepsilon). \tag{3.1}$$

In other words, for these $q$-s the frequency of $q$ type individuals can be arbitrarily close to 1, moreover, even equal to 1; still, the average intake per unit time of a $p$ type individual is strictly greater than the population average. Consequently, if an individual of $p$ type appears in a population consisting only of $q$ type strategists, then strategy $p$ will successfully invade and spread.
There is still a difficulty in checking whether a strategy is a UESS or not. Even if we have a suitable candidate for $\delta$, we have to check inequality (3.1) for every $\varepsilon \in (0, 1]$. Fortunately, the following observation solves this problem too.

Theorem 3.2 A strategy $p$ is a UESS if and only if there is a $\delta > 0$ such that
$$\omega_p(p, q, 1) > \omega_q(p, q, 1) = \omega(q) \quad \text{whenever } 0 < \|q - p\| < \delta.$$
Note that ε = 1 intuitively means that a single p individual appears in an infinitely large population consisting of only q individuals.
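The $\varepsilon = 1$ condition of Theorem 3.2 is easy to test numerically: place a lone $p$ mutant in a pure $q$ population and compare intake rates. A sketch (the function name and sign convention are ours):

```python
import numpy as np

def invasion_fitness_gap(p, q, A, T):
    """omega_p(p, q, 1) - omega(q): advantage of a rare p individual
    in a population consisting entirely of q strategists (epsilon = 1)."""
    c = q @ T @ q                        # expected waiting time of q against q
    rho_q = 1.0 if c == 0 else (np.sqrt(4.0 * c + 1.0) - 1.0) / (2.0 * c)
    mean = rho_q * q                     # active "mass" of the q-population
    w_p = (p @ A @ mean) / (1.0 + p @ T @ mean)
    w_q = (q @ A @ mean) / (1.0 + q @ T @ mean)
    return w_p - w_q
```

A strictly positive gap for all $q$ near $p$ (with the uniform $\delta$ of Theorem 3.2) indicates a UESS; without time constraints ($T = 0$) the gap reduces to $pAq - qAq$.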
A strategy satisfying the conditions described in the previous statements is well known in the literature, and such conditions are often used as the definition of (U)ESS. Maynard Smith's original wording of the ESS (Maynard Smith and Price 1973; Maynard Smith 1974, 1982) is also very similar to that in Theorem 3.2. Thomas (1985) simply calls a strategy satisfying the conditions in Theorem 3.1 an ESS. Bomze and his colleagues use the terminology strongly uninvadable strategy for a strategy described in Theorem 3.2 (Bomze and Pötscher 1989; Bomze and Weibull 1995).
In other works (Maynard Smith and Price 1973; Maynard Smith 1974, 1982; Bomze and Pötscher 1989; Bomze and Weibull 1995; Balkenborg and Schlag 2001; Hofbauer and Sigmund 1998) the authors consider payoff functions linear in their first variable and smooth enough otherwise. In this case, there is a variety of equivalent wordings of the notion of ESS. However, the appearance of non-linearity can break the equivalences between the different definitions, resulting in new notions. This is highlighted by Apaloo's terminology when he emphasizes the "invader" feature of a strategy satisfying the conditions of either of the previous two statements (Apaloo 1997, 2006; Apaloo et al. 2009). Let $E(r, p, q, \varepsilon)$ be the payoff of an $r$ individual in a population consisting of individuals a proportion $(1-\varepsilon)$ of which follows strategy $p$ and a proportion $\varepsilon$ of which follows strategy $q$. Call a strategy $p_*$ a locally strict neighbourhood invader strategy (NIS) if there is a $\delta > 0$ such that $E(p_*, q, p_*, 0) > E(q, q, p_*, 0)$ whenever $0 < \|p_* - q\| < \delta$. Apaloo reveals that the characterizations in Theorems 3.1 and 3.2 are not necessarily true for general (non-linear) payoff functions (Apaloo 1997, p. 75). This shows the significance of Theorem 3.2 from another aspect. Namely, despite the non-linearity of $\omega$ in its variables, a strategy $p_*$ is a UESS of a matrix game under time constraints if and only if it is a locally strict NIS.
For classical matrix games it is known that a state corresponding to an ESS is an asymptotically stable rest point of the replicator dynamics with respect to the pure strategies (Hofbauer and Sigmund 1998, Theorem 7.2.4); moreover, a strategy is an ESS if and only if it is strongly stable (Hofbauer and Sigmund 1998, Theorem 7.3.2). The notion of strong stability was introduced and investigated in detail by Cressman (1990, 1992). A strategy $p_* \in S_N$ is strongly stable if, whenever $p_*$ is the convex combination of some (finitely many) strategies $p_1, \dots, p_n \in S_N$, the mean strategy $\sum_i x_i p_i$ of a population consisting of $p_1, \dots, p_n$ individuals with frequencies $x_1, \dots, x_n$ tends to $p_*$ under the replicator dynamics with respect to the strategies $p_1, \dots, p_n$. (Note that $n$ can differ from $N$.) The validity of this relationship in the more general frame of matrix games under time constraints is in question. Namely, when $p_*$ is a convex combination of $p_1, \dots, p_n$, we do not know whether the set of states $x$ with $\bar h(x) = p_*$ is always stable or whether there is a counterexample. We remarked at the end of the Introduction in Garay et al. (2018) that we conjecture the latter. Nevertheless, for two dimensions, it has been proved that if $p_i = e_i \in S_2$ ($i = 1, 2$), then the state $x$ with $\bar h(x) = p_*$ is a locally asymptotically stable rest point of the replicator dynamics if and only if $p_*$ is a UESS (Garay et al. 2018, Theorem 4.2). Now, using Theorem 3.2, we extend this result to the case when there are finitely many types in the population and $p_*$ is a convex combination of them. This result implies that in order to find a counterexample one needs to investigate games with at least three pure strategies, but to calculate such a game is very difficult because we generally cannot explicitly express the solution of equation system (2.1).
On the other hand, since the relationship between $x$ and the corresponding strategy $\bar h(x)$ (see Remark 2.7) is not as straightforward as in the case of classical matrix games, we conjecture that, in suitable cases, this can cause a distortion to an extent which is enough to ensure the instability of a state corresponding to a UESS with respect to the replicator dynamics.
Theorem 3.3 Let $p_1, \dots, p_n \in S_2$ and let $p_*$ be a convex combination of them. If $p_*$ is a UESS, then the set $G$ of states $x$ with $\bar h(x) = p_*$ is locally asymptotically stable under the replicator dynamics with respect to $p_1, \dots, p_n$.
The local asymptotic stability of $G$ means that for every $\varepsilon > 0$ there is a $\delta > 0$ such that if the initial state is closer to $G$ than $\delta$, then the solution remains closer to $G$ than $\varepsilon$ for all later times and, moreover, tends to $G$. The converse is also true, except in the case where $p_* = p_{i_0} = p_{i_0}^1 e_1 + p_{i_0}^2 e_2$ for some $i_0$ with $0 < p_{i_0}^1 < 1$ such that $p_{i_0}^1$ is the smallest or the greatest among the first coordinates of $p_1, \dots, p_n$. If, for example, $p_{i_0}^1$ is the smallest one, we cannot infer the direction of inequality (2.4) from the asymptotic stability for a strategy $q$ with $q^1 < p_{i_0}^1$. We therefore state the converse direction separately.
Theorems 3.3 and 3.4 together essentially mean that a strategy $p_* \in S_2$ is a UESS if and only if it is strongly stable. The theorems are proved in "Appendix A.2".
Example To illustrate Theorem 3.3, consider the following example with the time constraint matrix and the payoff matrix given in (3.2). It was shown in Garay et al. (2018) that $(1, 0)$ is a strict NE, $(1/2, 1/2)$ is a NE and $(1 - \sqrt{2}/2, \sqrt{2}/2)$ is a (mixed) ESS.

Fig. 2 The phase portrait of the replicator dynamics with respect to the strategies $p_1 = (1, 0)$, $p_2 = (0, 1)$ and $p_3 = (1/3, 2/3)$ corresponding to the matrix game under time constraints with matrices (3.2). The phase space of the dynamics is $S_3$. The red (dotted) segment between $r_1$ and $r_2$ corresponds to the ESS $(1 - \sqrt{2}/2, \sqrt{2}/2)$; the green (dashed) segment between $g_1$ and $g_2$ corresponds to the NE $(1/2, 1/2)$. The red segment and the state $(1, 0, 0)$ (corresponding to the strict NE $p_1$) are asymptotically stable, while the green segment is unstable. Note that though the two segments may seem parallel, they are not; only their slopes are close.

Some further results
Here we describe some further results which show that the folk theorem of evolutionary game theory (Hofbauer and Sigmund 1998, Theorem 7.2.4) also remains true in higher dimensions in some special cases. The proofs can be found in "Appendix A.3".
It is true that every UESS is a NE. We show that every strict NE is a UESS. This observation allows a slight extension of Theorem 4.1 in Garay et al. (2018), which says that the state corresponding to a strict NE is an asymptotically stable rest point of the replicator dynamics.

Theorem 4.1 If p is a strict NE then p is a UESS.
A strict NE is always a pure strategy (see Remark 2.4), say e_1 = (1, 0, . . . , 0) ∈ S_N, and the corresponding state is asymptotically stable (Garay et al. 2018, Theorem 4.1). By Proposition 2.8, the state corresponding to e_1 is itself in the polymorphic population of the phenotypes e_1, . . . , e_N. The key step in showing that the state (1, 0, . . . , 0) ∈ S_N is asymptotically stable is the verification of the inequalities W_1(x) > W_i(x), i = 2, 3, . . . , N, in a neighbourhood of (1, 0, . . . , 0). An immediate consequence is W_1(x) > W̄(x). Recall the notation xW(x) for the scalar product Σ_i x_i W_i(x). Then W_1(x) > W̄(x) is just e_1 W(x) > xW(x). A state satisfying an inequality of this kind is called a polymorphic stable state.

Definition 4.2 The state x̄ ∈ S_n is called a polymorphic stable state (PSS) if x̄W(x) > xW(x) for every x ≠ x̄ in some neighbourhood of x̄.
From a biological perspective this means that the subpopulation in the PSS x̄ has a higher mean fitness than the whole population, so the state of the population is expected to evolve towards the PSS x̄. Indeed, following the proof of Theorem 7.2.4 in Hofbauer and Sigmund (1998), one can easily see that a PSS is always an asymptotically stable rest point of the standard replicator dynamics. Accordingly, a state corresponding to a strict NE is asymptotically stable, and to validate this it is enough to have e_1 W(x) > xW(x) rather than W_1(x) > W_i(x), i = 2, 3, . . . , N. This observation sheds light on the importance of the next theorem.

Lemma 4.3 Assume that p* is a UESS. Then there is a δ > 0 such that if p* is not in the convex hull of p_1, . . . , p_n, then W*(x) > W̄(x) whenever 0 < x_1 + · · · + x_n < δ.
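The PSS inequality e_1 W(x) > xW(x) near a strict NE can be checked directly in the classical linear special case W(x) = Ax (the time-constrained payoff of the paper is non-linear; the matrix A below is a hypothetical stand-in chosen so that e_1 is a strict NE):

```python
import numpy as np

# Classical linear stand-in: e1 is a strict NE because A[0,0] > A[1,0].
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
e1 = np.array([1.0, 0.0])

def pss_gap(x):
    """e1·W(x) − x·W(x) with W(x) = A x; a positive value means the
    PSS inequality holds at x."""
    W = A @ x
    return e1 @ W - x @ W

# For this A the gap factors as −(3 x1 − 1)(x1 − 1), hence it is positive
# for 1/3 < x1 < 1, i.e. in a whole punctured neighbourhood of e1.
gaps = [pss_gap(np.array([x1, 1.0 - x1])) for x1 in np.linspace(0.4, 0.999, 50)]
```

This is the scalar inequality that, by the remark above, already suffices for asymptotic stability; the stronger componentwise inequalities W_1(x) > W_i(x) are not needed.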

Remark 4.4
Note that δ is independent of the strategies p_1, . . . , p_n and of the integer n.

Corollary 4.5 Assume that p* is a UESS and p* is not in the convex hull of p_1, . . . , p_n. Then (1, 0, . . . , 0) ∈ S_{n+1} is a PSS with respect to the polymorphic population of phenotypes p*, p_1, . . . , p_n.

Now we start to analyze the equilibrium states of the replicator dynamics from a game theoretical point of view. The next lemma summarizes the relationship between the equilibrium points of a matrix game and the corresponding replicator dynamics (2.10).
Lemma 4.6 (a) If, for some x̄ ∈ S_n, … holds, then x̄ is an equilibrium point of the replicator dynamics. (b) If x̄ ∈ int S_n is a rest point of the replicator dynamics (2.10), then the strategy p* defined as in (a) is a NE. (c) If x̄ ∈ S_n is a stable rest point of the dynamics, then the strategy p* defined as in (a) is a NE. (d) Assume that the singleton {x̄} ⊂ S_n is the ω-limit set of an orbit x(t) running in int S_n. Then p* is a Nash equilibrium.
One can ask whether more can be said in the previous lemmas. Is it not possible that every NE corresponds to a(n asymptotically) stable rest point? Is it not true that every rest point corresponds to a NE? The answer is negative, and this is already the case for classical matrix games, as suggested by Exercises 7.2.2 and 7.2.3 in Hofbauer and Sigmund (1998).
To obtain asymptotic stability, stronger conditions must be assumed. As mentioned, for classical matrix games it is known that the state corresponding to an ESS is asymptotically stable (Hofbauer and Sigmund 1998, Theorem 7.2.4), and this assertion can also be reversed in some sense (Hofbauer and Sigmund 1998, Chapter 7.3). We therefore investigate the stability of states corresponding to a UESS.
The simplest case is when the UESS p* is among the existing phenotypes p*, p_1, . . . , p_n and is not in the convex hull of the other strategies.
Theorem 4.7 Assume that p* is a UESS and is not in the convex hull of p_1, . . . , p_n, that is, there is no (α_1, . . . , α_n) ∈ S_n with Σ_i α_i p_i = p*. Then (1, 0, . . . , 0) ∈ S_{n+1} is a locally asymptotically stable rest point of the replicator dynamics belonging to the polymorphic population of phenotypes p*, p_1, . . . , p_n.
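The situation of Theorem 4.7 can be sketched numerically, again only in the classical linear special case (a hypothetical stand-in for the time-constrained model): the mixed ESS p* = (1/2, 1/2) of a Hawk–Dove matrix lies outside the convex hull of p_1 = (0.8, 0.2) and p_2 = (0.9, 0.1), and the state (1, 0, 0) of the p*, p_1, p_2 population attracts nearby states.

```python
import numpy as np

# Hypothetical classical stand-in; ESS of A is (1/2, 1/2).
A = np.array([[-1.0, 2.0],
              [0.0, 1.0]])
P = np.array([[0.5, 0.5],           # p*, the ESS
              [0.8, 0.2],           # p1
              [0.9, 0.1]])          # p2; conv{p1, p2} does not contain p*

x = np.array([1/3, 1/3, 1/3])       # start far from (1, 0, 0)
dt = 0.1
for _ in range(10000):
    h = x @ P                       # mean strategy of the population
    w = P @ (A @ h)                 # phenotype fitnesses (linear case)
    x = x + dt * x * (w - x @ w)    # Euler step of the replicator dynamics
    x = x / x.sum()                 # renormalise against numerical drift
```

The frequency of the p* phenotype approaches 1: since p* cannot be mixed from p_1 and p_2, the mean strategy reaches the ESS only at the vertex (1, 0, 0), consistent with the asymptotic stability claimed by the theorem.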
With Theorem 4.1 in hand we immediately conclude:

Corollary 4.8 (Garay et al. 2018, Theorem 4.1) If p* is a strict NE, then the corresponding state is a locally asymptotically stable rest point of the replicator dynamics with respect to the pure phenotypes e_1, . . . , e_N.
It is important to emphasize that p* lies outside the convex hull of the other strategies; otherwise, by Proposition 2.8, there are other stable rest points in any neighbourhood of (1, 0, . . . , 0). However, the stability of (1, 0, . . . , 0) remains true.
Theorem 4.9 Assume that p * is a UESS and consider the population of phenotypes p * , p 1 , . . . , p n . Then (1, 0, . . . , 0) ∈ S n+1 is a locally stable rest point of the replicator dynamics with respect to the polymorphic population of phenotypes p * , p 1 , . . . , p n .
The theorem provides new information in the case when p* is in the convex hull of p_1, . . . , p_n.

Discussion
Interactions between individuals often require time, and the time needed may vary with the strategies of the participants. During this time the individuals are unavailable for other interactions, which divides the population into an active (ready to interact) part and an inactive (unable to interact) part. Taking into account the different time demands of distinct strategies can thus lead to different proportions of active individuals in the subpopulations of different strategies. This fact is often neglected in classical evolutionary and economic game theory, which tacitly assumes that the time demands are all equal. However, other models draw attention to the importance of activity-dependent time constraints. Holling's functional response (Holling 1959) takes into account that the number of active predators at a given moment is less than their total number, since some of them are handling prey or digesting. Optimal foraging theory (Charnov 1976; Garay et al. 2012, 2015) and ecological games describing kleptoparasitism (Broom and Ruxton 1998; Broom et al. 2008, 2009, 2010; Broom and Rychtář 2013) also show the significant effect of time constraints on optimal behaviour.
Following the examples just mentioned, we have incorporated time constraints into matrix games, bringing the model closer to ecological reality. As a consequence, the calculations become much more involved. We have investigated which phenomena known for classical matrix games remain true. A central notion is the (U)ESS. Two further monomorphic characterizations of it have been given, namely Theorems 3.1 and 3.2, which show the equivalence between the notion of neighbourhood invader strategy (introduced by Apaloo) and the notion of UESS with respect to our payoff function, which is non-linear in each of its variables.
Applying the new characterizations, the extension of the folk theorem of evolutionary game theory has been continued. The state corresponding to a UESS is asymptotically stable in a polymorphic population in which one of the phenotypes is the UESS while the convex hull of the other phenotypes does not contain the UESS (this covers the case of a strict NE too). We have also seen that the state is stable if the other phenotypes can mix the UESS. Moreover, for two-dimensional multiplayer games, Theorems 3.3 and 3.4 show that the asymptotic stability of the set of states corresponding to a strategy is equivalent to the strategy being a UESS. Although the relationship remains open in higher dimensions, our results indicate that finding a counterexample, if one exists at all, is not simple.
In this part we cite or state some technical statements for the convenience of the reader and prove the new assertions that appeared in the previous sections.

Proof of Lemma 2.3 (i)⇔(ii) Consider a NE p. By (2.5), we have
Multiplying by p_i we get … Since p = Σ_{i∈supp(p)} p_i e_i, it follows that, taking the sum of the previous inequalities over i ∈ supp(p), the sum of the left-hand sides equals the sum of the right-hand sides. This is possible only if equality holds in (2.5) for every i ∈ supp(p). Now assume that (2.6) holds and q is an arbitrary strategy. It is clear that (2.6) is equivalent to the inequalities in (A.13). Multiplying the ith inequality in (A.13) by q_i and summing from i = 1 to i = N, we obtain an inequality equivalent to (2.5), with equality if supp(q) ⊂ supp(p).
This means that (2.5) holds for every q ∈ S N . Therefore p is a NE.
Lemma A.1 (Garay et al. 2017, Lemma 2) The following system of nonlinear equations in n variables, where the coefficients c_ij are positive numbers, has a unique solution in the unit hypercube [0, 1]^n.

Remark A.2 In Garay et al. (2017), Lemma 2, c_ij is assumed to be positive for every i, j, but considering the proof a bit further one can see that the lemma is valid with non-negative c_ij's too.

As claimed by the next lemma, the solution of the previous equation system varies continuously with the coefficients. Obviously, if there are only two phenotypes with positive frequency, then the average strategy of the active subpopulation is a convex combination of the two phenotypes. Conversely, if a strategy is a convex combination of the two phenotypes, does there exist a composition which mixes that strategy? The next lemma gives the precise answer.
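Since the display of the system in Lemma A.1 is missing from this excerpt, the sketch below assumes it has the shape x_i = 1 / (1 + Σ_j c_ij x_j), which matches the active-proportion equations of the model; this form should be checked against Garay et al. (2017), Lemma 2. A damped fixed-point iteration then locates the solution in [0, 1]^n:

```python
import numpy as np

# Assumed shape of the Lemma A.1 system: x_i = 1 / (1 + sum_j c_ij x_j)
# with non-negative coefficients c_ij (cf. Remark A.2).
rng = np.random.default_rng(0)
C = rng.uniform(0.0, 1.0, size=(4, 4))   # non-negative coefficients

def F(x):
    """The map whose fixed point solves the system; it sends [0,1]^n into itself."""
    return 1.0 / (1.0 + C @ x)

x = np.full(4, 0.5)
for _ in range(500):
    x = 0.5 * (x + F(x))                 # damping guards against oscillation

residual = np.max(np.abs(x - F(x)))      # should be numerically zero
```

The damping is used because F itself is antitone, so the plain iteration can oscillate; averaging with the previous iterate makes the composite map contractive in this regime.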

A.2 Proof of the main theorems
Now, we are ready to prove the two important static characterizations of the notion of UESS.

Proof of Theorem 3.2
The necessity is clear; the problematic direction is the sufficiency. Assume that (3.1) holds with ε = 1 for every u with 0 < ||u − p|| < δ, and let q be an arbitrary element of S_n with 0 < ||q − p|| < δ. We verify that (3.1) holds for every ε ∈ (0, 1]. For ε = 1, this is just the assumption. For 0 < ε < 1, let r(ε) be as in Lemma A.5: r(ε) lies on the segment connecting p and q and satisfies … On the other hand, by Proposition 2.6, the mean fitness of the population consisting of p and q individuals with proportions (1 − ε) and ε, respectively, equals the fitness in a population consisting only of r(ε) individuals, which can in turn be viewed as a population consisting of p and r(ε) individuals with proportions 0 and 1, respectively. By Proposition 2.6 again, the latter interpretation also shows that the fitness of a p individual in a population consisting only of r(ε) individuals is equal to that in the population of p and q individuals. Formally, this means that ω̄(p, q, ε) = ω_{r(ε)}(p, r(ε), 1) and ω_p(p, r(ε), 1) = ω_p(p, q, ε). By assumption, ω_{r(ε)}(p, r(ε), 1) < ω_p(p, r(ε), 1) because ||r(ε) − p|| < δ. We immediately conclude that ω̄(p, q, ε) < ω_p(p, q, ε), which is possible if and only if ω_q(p, q, ε) < ω_p(p, q, ε).
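The final step of the proof rests on the mean fitness being the ε-weighted average of ω_p and ω_q. In the classical linear special case this averaging identity can be checked directly (a stand-in illustration, since the paper's payoff is non-linear): with h = (1 − ε)p + εq and payoffs q·Ah, linearity gives h·Ah = (1 − ε) p·Ah + ε q·Ah, so ω̄ < ω_p is equivalent to ω_q < ω_p once ε > 0.

```python
import numpy as np

# Numerical check of the averaging identity in the classical linear case.
rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))          # arbitrary payoff matrix
p = rng.dirichlet(np.ones(3))        # resident strategy
q = rng.dirichlet(np.ones(3))        # mutant strategy
eps = 0.3

h = (1 - eps) * p + eps * q          # mean strategy of the mixed population
omega_p = p @ A @ h                  # fitness of a p individual
omega_q = q @ A @ h                  # fitness of a q individual
omega_bar = h @ A @ h                # mean fitness

gap = abs(omega_bar - ((1 - eps) * omega_p + eps * omega_q))
```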
We prove that ||p* − h̄(x)|| > ||p* − h̄(y)||. Suppose the contrary, that is, there are states x, y satisfying the previous conditions (i)-(iv) but with ||p* − h̄(x)|| ≤ ||p* − h̄(y)||. Consider a state z_0 ∈ S_n with … Now take z = (z_1, . . . , z_n) = z_0 ∈ S_n and start to increase the first coordinate of z in the following way:
• z_1 cannot become greater than x_1;
• as z_1 increases, z_{k+1}, . . . , z_n decrease, but z_i cannot become less than x_i for k + 1 ≤ i ≤ n; say, first we decrease z_n until z_n = x_n, then we decrease z_{n−1} until z_{n−1} = x_{n−1}, and so on;
• if for some 0 < z_1 ≤ x_1 we have ||p* − h̄(z)|| = ||p* − h̄(y)||, then we stop;
• if for every 0 < z_1 ≤ x_1 we have ||p* − h̄(z)|| > ||p* − h̄(y)||, then we set z_1 to be x_1 and start to increase z_2, repeating the process with index 2 in place of index 1;
• if we do not find a z with ||p* − h̄(z)|| = ||p* − h̄(y)|| by moving z_2, then we set z_2 to be x_2 and start to increase z_3, and so on.
Since h̄ is continuous in z and ||p* − h̄(x)|| ≤ ||p* − h̄(y)|| < ||p* − h̄(z_0)||, we must find a z ∈ S_n such that (i) holds. Since p*_2 < h̄_2(y) also holds, h̄(z) must be equal to h̄(y). As ϱ̄ is the solution in [0, 1] of the equation ϱ̄ = 1/(1 + h̄T(ϱ̄h̄)), we also have … Consequently, we get that (A.19) can be continued as

Conclusion
In summary, if 0 < ||h̄(x) − p*|| < δ and p*_2 < h̄_2(x), then for a positive x_i the expression x_i[W_i(x) − W̄(x)] is strictly positive or strictly negative according as 1 ≤ i ≤ k or k + 1 ≤ i ≤ n (Observation 1). This means that if x_i > 0, then x_i strictly increases for 1 ≤ i ≤ k and strictly decreases for k + 1 ≤ i ≤ n, which, by Observation 2, implies that h̄_2(x) strictly decreases until it reaches p*_2. If h̄_2(x) < p*_2, a similar argument shows that h̄_2(x) tends to p*_2 strictly increasingly. Consequently, h̄(x) → p* in a monotone way.
By (A.20), it can be continued as … < ω_p(p, q̄, η) = ω_p(p, q, ε), so by comparing the rightmost side with the leftmost one we get that ω_q(p, q, ε) < ω_p(p, q, ε) for every q ≠ p and 0 ≤ ε ≤ ε_0, which proves that p is a UESS.
Since x̄ is an interior rest point, the right-hand side of the replicator dynamics vanishes if and only if W_i(x̄) = W̄(x̄) for every i, which, by Lemma 2.3, immediately implies that p* is a NE.

(c)-(d) Since x̄ is a rest point, it follows that if x̄_i > 0 then … must hold. Therefore, if, contrary to our claim, p* is not a NE, then the equilibrium condition can be violated only for some i with x̄_i = 0. For such an index i, say i_0, there is an ε > 0 such that … So, by continuity, W_{i_0}(x) − W̄(x) > ε/2 in a bounded neighbourhood H of x̄. From here the proofs of (c) and (d) branch off.

(c) Since x̄ is stable, there is another neighbourhood H′ of x̄ such that any solution starting from H′ remains in H forever. Take an arbitrary x ∈ H′ with x_{i_0} > 0 and consider this x as an initial value. Then (d/dt) log x_{i_0}(t) = W_{i_0}(x(t)) − W̄(x(t)) > ε/2, so x_{i_0}(t) → ∞, contradicting that x(t) remains in H.

(d) Since x̄ is the only member of the ω-limit set of x(t) and x(t) is bounded, it follows that x(t) → x̄; in particular, x_{i_0}(t) → x̄_{i_0} = 0. Hence, there is a t_0 such that x(t) ∈ H whenever t ≥ t_0, from which we infer that log x_{i_0}(t) → ∞ as in the proof of (c), which contradicts x_{i_0}(t) tending to 0.
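The interior-rest-point condition of part (b) — all fitnesses equal to the mean — can be illustrated in the classical linear special case with a hypothetical stand-in matrix; for the rock–paper–scissors game the barycentre is exactly such a point:

```python
import numpy as np

# Hypothetical classical stand-in: rock-paper-scissors payoff matrix.
# At the interior rest point x̄ = (1/3, 1/3, 1/3) all fitnesses coincide
# with the mean fitness, so the replicator right-hand side vanishes.
A = np.array([[0.0, -1.0, 1.0],
              [1.0, 0.0, -1.0],
              [-1.0, 1.0, 0.0]])
xbar = np.array([1/3, 1/3, 1/3])

W = A @ xbar                        # W_i(x̄) in the linear case
wbar = xbar @ W                     # mean fitness W̄(x̄)
rhs = xbar * (W - wbar)             # right-hand side of the replicator dynamics
```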
Proof of Theorem 4.7 By Corollary 4.5, the vector (1, 0, . . . , 0) ∈ S_{n+1} is a PSS, and a PSS is an asymptotically stable rest point of the replicator dynamics in accordance with the remark immediately after Definition 4.2.
In the next proof we again apply Theorem 3.2.