A Note on Complexity Measures for Probabilistic P Systems

Abstract. In this paper we present a first approach to the definition of different entropy measures for probabilistic P systems, in order to obtain some quantitative parameters showing how complex the evolution of a P system is. To achieve this, we define two possible measures: the first one reflects the entropy of the P system considered as the state space of possible computations, and the second one reflects the change of the P system as it evolves.


Introduction
In [3] Gh. Păun introduced a new computing device, called a P system, within the framework of Natural Computing, based upon the observation that the processes which take place in the complex structure of a living cell can be considered as computations. Since their definition, these complex systems have been investigated from many points of view. For instance, they have been shown to be able to solve hard problems in less time than classical devices. Starting from an initial state, they evolve during the execution by means of rules that can modify both the objects inside the system and its structure.
If we look at P systems as probabilistic devices, where several different computations can be reached from a given initial state, it is intuitively obvious that some uncertainty arises in the result provided by the P system. In order to define a quantitative property allowing us to measure this uncertainty, we begin with some general observations about the notion of a system.
A common-sense definition of the notion of a system is a group of units so combined as to form a whole and to operate in unison. There are several other definitions in the literature. For example, a technical definition was presented by Hall and Fagan [1]: a system is a set of objects together with relationships between the objects. Formalists present a very simple and general definition: given a family of base sets $X_1, \dots, X_n$, a system $S$ is any relation (subset) in $X = \prod_{i=1}^{n} X_i$. Constructivists disagree with this formalist point of view, emphasizing that the natural world of evolving systems can never be captured by formal systems with an a priori fixed and finite number of base sets; hence, they propose open systems, which define their elements and base sets during the processes of their evolution.
Nevertheless, there are some common characteristics in all the above approaches:
• A variety of distinct entities.
• These entities are involved in some kind of relations.
• These relations are sufficient to generate a new entity in a higher complexity hierarchy.
In the last 20 years, the study of complex systems has emerged as a recognized field in its own right, although a good definition of what a complex system is has eluded rigorous formulation. Attempts to formalize the concept of complexity go back even further, to Shannon's Information Theory [5], where the concepts of entropy and information of a system (as measures of its uncertainty and its variety) were defined.

Entropy and Information
The state space, $S$, of a system is the set of all possible states that the system can be in. An essential component in the study of a system is a quantitative measure for the size of its state space, usually called variety. It represents the freedom of the system to reach a particular state, and thus the uncertainty we have about which state the system is in. The variety, $V$, is defined as the number of elements in the state space or, more commonly, as $V = \log_2(|S|)$, if we encode the states of the system by bits. A variety of one bit, $V = 1$, means that the system has two possible states, that is, one choice. In the case of $n$ binary variables, $V = \log_2(2^n) = n$ is therefore equal to the number of independent choices.
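As a quick illustration of the definition above, the following minimal Python sketch computes the variety of a state space from its size (the function name is ours, chosen only for this example):

```python
import math

def variety(num_states: int) -> float:
    """Variety of a system in bits: V = log2(|S|)."""
    return math.log2(num_states)

# Three independent binary variables give 2**3 = 8 states,
# hence a variety of 3 bits: three independent choices.
print(variety(2 ** 3))  # → 3.0
```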
A system's variety, $V$, measures the number of possible states the system can exhibit, and corresponds to the number of independent binary variables. But in general, the variables used to describe a system are neither binary nor independent. If the actual variety of states that the system can exhibit is smaller than the variety of states we can potentially conceive, then we say that the system is constrained, which means that the system cannot make full use of the available freedom, because some internal or external laws do not allow certain combinations of values for the variables. Constraint reduces our uncertainty about the system's state.
Hence, the constraint, $C$, of the system can be defined as the difference between maximal and actual variety: $C = V_{\max} - V$. Variety and constraint can be generalized to a probabilistic framework, where they are replaced, respectively, by entropy and information.
Let us suppose that we do not know the precise state, $s$, of a system, but only the probability distribution, $P(s)$, of the system being in state $s$. The generalization of the variety of the system can then be expressed as the entropy $H$:
$$H = -\sum_{s \in S} P(s) \cdot \log_2 P(s).$$
$H$ reaches its maximum value if all states are equiprobable, that is, if we have no information to assume that one state is more probable than another (and, in this case, entropy reduces to variety). Like variety, $H$ expresses our uncertainty or ignorance about the system's state. The following result, in which we obtain maximal certainty (or complete information) about the state of the system, is straightforward.
Lemma 2.1 The entropy vanishes, i.e., H = 0, if and only if there exists a state, s ∈ S, such that P (s) = 1.
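The entropy formula and the vanishing case of Lemma 2.1 can both be checked with a short Python sketch (the function name is our own; zero-probability terms are skipped, following the usual convention $0 \cdot \log(0) = 0$):

```python
import math

def entropy(dist):
    """Shannon entropy H = -sum(P(s) * log2 P(s)).
    Terms with P(s) = 0 are skipped (convention 0 * log(0) = 0)."""
    s = sum(p * math.log2(p) for p in dist if p > 0)
    return -s if s else 0.0

print(entropy([0.25, 0.25, 0.25, 0.25]))  # equiprobable: H = log2(4) = 2 bits
print(entropy([1.0, 0.0, 0.0, 0.0]))      # one certain state: H = 0 (Lemma 2.1)
```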
As we have seen, the constraint reduces the uncertainty, that is, the difference between maximal and actual uncertainty. This difference can also be interpreted in a different way, as information, and historically $H$ was introduced by Shannon [5] as a measure of the capacity for information transmission of a communication channel. If we obtain information about the state of the system, then this information will reduce our uncertainty about the system's state, by excluding (or reducing the probability of) a number of states. The information $I$ we receive is equal to the amount of uncertainty that is reduced, that is, the difference between the previous uncertainty about the system and the later one: $I = H_{\text{before}} - H_{\text{after}}$. Although Shannon disagreed with the use of the term information to describe this measure, his theory came to be known as Information Theory. Since then, entropy has been used as a measure for a number of higher-order relational concepts, including complexity and organization, and it has been applied in several fields of knowledge (such as biology, ecology, psychology, sociology, and economics) where the use of complex systems is unavoidable.
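A small numerical sketch of information as reduction of uncertainty, using the entropy function defined above (the scenario is our own toy example):

```python
import math

def entropy(dist):
    s = sum(p * math.log2(p) for p in dist if p > 0)
    return -s if s else 0.0

# Before any observation: four equiprobable states, H = 2 bits.
# An observation excludes two states: two equiprobable states remain, H = 1 bit.
before = [0.25, 0.25, 0.25, 0.25]
after = [0.5, 0.5, 0.0, 0.0]
print(entropy(before) - entropy(after))  # → 1.0 bit of information received
```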

Measuring the Entropy of Probabilistic P Systems
At this point there is no doubt that a probabilistic P system, PPS for short (regardless of the way the probability controlling the evolution of the system is defined), can be a complex system with the possibility to evolve over time. Hence, all the above parameters can be applied to measure the complexity of this kind of system.
Irrespective of which possibility is chosen to introduce probabilities into membrane systems (see [2] for a detailed study), if we look at the computations the system generates, the result is a tree with a probability measure over its nodes.
Let us recall that over a rooted tree we can define a direct (parent–child) relation between adjacent nodes. With respect to this relation, for every node $x$ of the tree, we denote by $Ch(x)$ the set of its children (which may be empty if $x$ is a leaf; we denote by $L$ the set of leaves of the tree).

Definition 3.1 Given a rooted tree $G$, with root $r$, we say that a function $P : V(G) \to [0,1]$ is a probability function over $G$ if the following conditions are verified:
1. $P(r) = 1$;
2. for every node $x \in V(G) \setminus L$, $\sum_{y \in Ch(x)} P(y) = 1$.

In this context, a PPS generates a rooted tree whose nodes, labelled by the configurations reachable in the computation, have an associated probability: the root of the tree is the initial configuration of the system, and the probability of every node is the probability of reaching it from the previous configuration. This tree will be denoted by $T(\Pi)$ (or briefly by $T$ if there is no confusion). The set of maximal branches of $T$ (the computations of the P system) will be denoted by $Comp(\Pi)$. If $C \in Comp(\Pi)$, then we denote by $C_i$ the $i$-th configuration of the computation $C$; therefore $C_0$ is the initial configuration of the P system (see [4] for a formal definition of these concepts).
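The two conditions of Definition 3.1 can be verified mechanically. The sketch below uses a hypothetical encoding of our own, a node as a pair `(probability, children)`, which is not taken from [2] or [4]:

```python
def is_probability_function(node, is_root=True):
    """Check Definition 3.1: the root has probability 1, and the children of
    every internal node have probabilities summing to 1."""
    prob, children = node
    if is_root and prob != 1.0:
        return False
    if children and abs(sum(c[0] for c in children) - 1.0) > 1e-12:
        return False
    return all(is_probability_function(c, is_root=False) for c in children)

print(is_probability_function((1.0, [(0.5, [(1.0, [])]), (0.5, [])])))  # → True
print(is_probability_function((1.0, [(0.5, []), (0.4, [])])))           # → False
```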
Because a P system is a dynamic system in which the probability of the different evolutions depends on the actual state of the device, in order to capture this evolution and reflect its instant entropy, we propose two different entropies: a global one and a dynamic one.
In the case of the global entropy, we consider the set of possible computations of the PPS as the state space, that is, we study the different final states the P system can reach in its whole execution.
To define the entropy for the global case, we need a probability measure over $Comp(\Pi)$. We obtain it by using the probability defined over the PPS.

Definition 3.2 Given a PPS, $(\Pi, P)$, we define a new probability measure over $Comp(\Pi)$, denoted with the same symbol, as:
$$P(C) = \prod_{x \in C} P(x), \quad \text{for every } C \in Comp(\Pi).$$
According to the previous definition of entropy and the probability defined over $Comp(\Pi)$, the global entropy of the PPS is the following.

Definition 3.3
The global entropy of a PPS, $(\Pi, P)$, is defined as:
$$H_g(\Pi) = -\sum_{C \in Comp(\Pi)} P(C) \cdot \log_2 P(C).$$

Note 3.2 Since $\lim_{x \to 0} x \cdot \log(x) = 0$, we consider $0 \cdot \log(0) = 0$.

Note 3.3 This quantity measures the uncertainty of the P system $\Pi$ as it evolves along the possible computations. Note that, if the P system has only one possible computation, then that computation will have probability equal to 1; hence, from Lemma 2.1, the entropy of the system will be 0, that is, there is no uncertainty in the evolution of the system. But is there any way to define a measure reflecting the instant entropy of the system? Next, we will try to define a more complex measure for the entropy of the PPS, one which in some way captures the instant of its execution. But first we need some definitions over trees.
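The global entropy can be sketched in Python for a finite computation tree. The `(probability, children)` node encoding and the function names are our own illustration, not part of the formal model:

```python
import math

def branch_probabilities(node, acc=1.0):
    """Probability of each computation: the product of node probabilities
    along its branch, in the sense of Definition 3.2."""
    prob, children = node
    acc *= prob
    if not children:                      # a leaf: a halting configuration
        return [acc]
    return [p for c in children for p in branch_probabilities(c, acc)]

def global_entropy(root):
    s = sum(p * math.log2(p) for p in branch_probabilities(root) if p > 0)
    return -s if s else 0.0

# Root (probability 1) with two equiprobable computations:
tree = (1.0, [(0.5, []), (0.5, [])])
print(global_entropy(tree))  # → 1.0 bit of uncertainty
```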
In a rooted tree, $G$, we can consider a depth function, defined recursively as follows:
$$depth(r) = 0; \qquad depth(x) = depth(f(x)) + 1, \text{ for every node } x \neq r,$$
where, for every node $x$ other than the root, $f(x)$ denotes the father of $x$ in $G$. The total depth of $G$ is defined as $depth(G) = \sup_{x \in V(G)} depth(x)$, and we denote by $G_n$ the set of nodes of depth $n$ (the $n$-th level of $G$). If $x \in G$, then we define the path to $x$ as the set of nodes of $G$ connecting the root of the tree with the node $x$ (there is only one set verifying this condition, because $G$ is a tree), and it will be denoted by $\gamma_x$.
If $(G, P)$ is a tree with a probability function defined over its nodes then, by using the paths defined above, we can define a new probability function (derived from $P$) over the levels of $G$.

Lemma 3.1 Let $(G, P)$ be a tree with a probability function defined over its nodes. Then, for every $n \in \mathbb{N}$, the following function is a probability function over $G_n$:
$$\forall x \in G_n, \quad P_n(x) = \prod_{y \in \gamma_x} P(y).$$

If $(\Pi, P)$ is a PPS, then $T$ is a rooted tree, so we can apply all the above definitions, obtaining the $n$-level entropy of the system.

Definition 3.4 For every $n \in \mathbb{N}$, the $n$-level entropy of a PPS, $(\Pi, P)$, is defined as:
$$H_n(\Pi) = -\sum_{C \in T_n} P_n(C) \cdot \log_2 P_n(C).$$

Now we can define the amount of information of the P system $\Pi$ along its execution.
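A sketch of the $n$-level entropy, with the same illustrative `(probability, children)` encoding used above. One modelling assumption is ours: a computation that halts before level $n$ keeps its final probability, so that each level's probabilities still sum to 1:

```python
import math

def level_probs(node, n, acc=1.0):
    """Path-product probabilities over level n, in the spirit of Lemma 3.1.
    Assumption (ours): branches halting before level n keep their final
    probability, so the level distribution still sums to 1."""
    prob, children = node
    acc *= prob
    if n == 0 or not children:
        return [acc]
    return [p for c in children for p in level_probs(c, n - 1, acc)]

def level_entropy(root, n):
    s = sum(p * math.log2(p) for p in level_probs(root, n) if p > 0)
    return -s if s else 0.0

tree = (1.0, [(0.5, [(0.5, []), (0.5, [])]), (0.5, [])])
print([level_entropy(tree, n) for n in range(3)])  # → [0.0, 1.0, 1.5]
```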
Definition 3.5 The sequence of information of the PPS $\Pi$ is defined as:

Conclusions

Note 3.1 It is easy to check that the above definition is, in fact, a probability measure over $Comp(\Pi)$.

Note 3.4 The sequence $\{H_n(\Pi)\}_{n \in \mathbb{N}}$ gives a dynamic measure of the evolution of the PPS $\Pi$ that approximates the global entropy defined above.

Lemma 3.2 For every PPS, $\Pi$, the sequence $\{H_n\}$ is monotonically increasing and
$$\lim_{n \to \infty} H_n(\Pi) = H_g(\Pi).$$
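The monotone convergence of the level entropies to the global entropy can be checked numerically on a toy tree, with the same illustrative `(probability, children)` encoding and the same halting-branch assumption as in the earlier sketches:

```python
import math

def paths(node, depth, acc=1.0):
    """Path-product probabilities down to the given depth; halted branches
    keep their final probability."""
    prob, children = node
    acc *= prob
    if depth == 0 or not children:
        return [acc]
    return [p for c in children for p in paths(c, depth - 1, acc)]

def H(probs):
    s = sum(p * math.log2(p) for p in probs if p > 0)
    return -s if s else 0.0

tree = (1.0, [(0.5, [(0.5, []), (0.5, [])]), (0.25, []), (0.25, [])])
levels = [H(paths(tree, n)) for n in range(4)]
H_g = H(paths(tree, 10))      # depth 10 is past every leaf of this toy tree
print(levels, H_g)            # the levels grow monotonically up to H_g
```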