Forecasting the growth of complexity and change

In the spirit of punctuated equilibrium, complexity is quantified relatively in terms of the spacing between equally important evolutionary turning points (milestones). Thirteen data sets of such milestones, obtained from a variety of scientific sources, provide data on the most important complexity jumps between the big bang and today. Forecasts for future complexity jumps are obtained via exponential and logistic fits on the data. The quality of the fits and common sense dictate that the forecast by the logistic function should be retained. This forecast stipulates that we have already reached the maximum rate of growth for complexity, and that in the future, complexity’s rate of change (and the rate of change in our lives) will be declining. One corollary is that we are roughly halfway through the lifetime of the universe. Another result is that complexity’s rate of growth has built up to its present high level via seven evolutionary subprocesses, themselves amenable to logistic description. © 2002 Elsevier Science Inc. All rights reserved.


Introduction
Change has always been an integral feature of life. "You cannot step twice in the same river," said Heraclitus, who has been characterized as the first Western thinker, illustrating the reality of permanent change. Heraclitus invoked an incontrovertible law of nature according to which everything is mutable: "all is flux." In the physics tradition such laws are called universal laws; an example is the second law of thermodynamics, which stipulates that entropy always increases and explains such things as why there can be no frictionless motion. In fact, there are theories that link the accumulation of complexity to the dissipation of entropy, or wasted heat.
The accelerating amount of change in technology, medicine, information exchange, and other social aspects of our life is familiar to everyone. Progress, questionably linked to technological achievements, has been following progressively increasing growth rates. The exponential character of the growth pattern of change is not new. Whereas significant developments for mankind crowd together in recent history, they sparsely populate the immense stretches of time in the earlier world. The marvels we witnessed during the 20th century surpass what happened during the previous one thousand years, which in turn is more significant than what took place during the many thousands of years that humans lived in hunting-gathering societies. What is new is that we are now reaching a point of impasse, where change is becoming too rapid for us to follow. The amount of change we are presently confronted with approaches the limit of the untenable. Many of us find it increasingly difficult to cope effectively with an environment that changes too rapidly.
What will happen if change continues at an accelerating rate? Is there a precise mathematical law that governs the evolution of change and complexity in the Universe? And if there is one, how universal is it? How long has it been in effect and how far in the future can we forecast it? If this law follows a simple exponential pattern, we are heading for an imminent singularity, namely the absurd situation where change appears faster than we can become aware of it. If the law is more of a natural-growth process (logistic pattern), then we cannot be very far from its inflection point, the maximum rate of change possible.

The Task
Change is linked to complexity. Complexity increases both when the rate of change increases and when the number of things changing around us increases. Our task then becomes to quantify complexity, as it evolved over time, in an objective, scientific, and therefore defensible way; to determine the law that best describes complexity's evolution over time; and then to forecast its future trajectory. This will throw light onto what one may reasonably expect as the future rate at which change will appear in society.
However, quantifying complexity is something easier said than done.

COMPLEXITY
There is an extensive literature on complexity, and much preoccupation with it among "hard" and "less hard" scientists alike. Yet we have neither a satisfactory definition for it nor a practical way to measure it. The term complexity remains today vague and unscientific. In his best-selling book Out of Control, Kevin Kelly concludes: [1] "How do we know one thing or process is more complex than another? Is a cucumber more complex than a Cadillac? Is a meadow more complex than a mammal brain? Is a zebra more complex than a national economy? I am aware of three or four mathematical definitions for complexity, none of them broadly useful in answering the type of questions I just asked. We are so ignorant of complexity that we haven't yet asked the right question about what it is."
But let us look more closely at some of the things that we do know about complexity today:
- It is generally accepted that complexity increases with evolution. This becomes obvious when we compare the structure of advanced creatures (animals, humans) to primitive life forms (worms, bacteria).
- It is also known that evolutionary change is not gradual but proceeds by jerks. In 1972 Niles Eldredge and Stephen Jay Gould introduced the term "punctuated equilibria": long periods of changelessness or stasis (equilibrium) interrupted by sudden, dramatic, brief periods of rapid change (punctuations). [2]
These two facts taken together imply that complexity itself must grow in a stepladder fashion, at least on a macroscopic scale.
- Another thing we know is that complexity begets complexity. A complex organism creates a niche for more complexity around it; thus complexity is a positive feedback loop amplifying itself. In other words, complexity has the ability to "multiply" like a pair of rabbits in a meadow.
- Complexity links to connectivity. A network's complexity increases as the number of connections between its nodes increases, and this enables the network to evolve. But you can have too much of a good thing. Beyond a certain level of linking density, continued connectivity decreases the adaptability of the system as a whole. Kauffman calls it "complexity catastrophe": an overly linked system is as debilitated as a mob of uncoordinated loners. [3]
These two facts argue for a process similar to growth in competition. Complexity is endowed with a multiplication capability, but its growth is capped, and that necessitates some kind of selection mechanism. Alternatively, the competitive nature of complexity's growth can be sought in its intimate relationship with evolution. One way or another, it is reasonable to expect that complexity follows logistic-growth patterns as it grows.

MILESTONES IN THE HISTORY OF THE COSMOS
The first thing that comes to mind when confronted with the image of stepwise growth for complexity over time is the major turning points in the history of evolution. Most teachers of biology, biochemistry, and geology at some time or another present to their students a list of major events in the history of life. The dates they mention invariably reflect milestones of punctuated equilibrium (or "punk eek" for short). Physicists tend to produce a different list of dates stretching over another time period with emphasis mostly on the early Universe.
Such lists constitute data sets that may be plagued by numerical uncertainties and personal biases depending on the investigator's knowledge and specialty. Nevertheless the events listed in them are "significant" because some investigator has singled them out as such among many others. Consequently they constitute milestones that can in principle be used for the study of complexity's evolution over time. However, in practice there are some formidable difficulties in producing a data set of turning points that cover the entire period of time (15 billion years).
I made the bold hypothesis that a law has been in effect from the very beginning. This was not an arbitrary decision on my part. The suggestion came when I first looked at an early compilation of milestones. In any case, I knew that confrontation with real data would be my final judge. More than once in this paper I have turned to the scientific method as defined by experimental physicists, namely: Following an observation (or hunch), make a hypothesis, and see if it can be verified by real data.

THE CHALLENGES
Here are the most challenging issues concerning this paper's methodology, in order of decreasing importance, and the way they were dealt with:

1. The complexity associated with a milestone must be quantified, at least in relative terms.
For example, how much complexity did the Cambrian explosion bring to the system compared to the amount of complexity added to the system when humans acquired speech?
To quantify the complexity associated with an evolutionary milestone we must look at the milestone's importance, defined as the change in complexity multiplied by the time duration to the next milestone: I = ΔC × ΔT. This definition follows the classical physics tradition: you start with the magnitude in question (in our case, importance), put an equal sign next to it, and then list in the numerator whatever the quantity is proportional to, and in the denominator whatever it is inversely proportional to, keeping track of possible exponents and multiplicative constants. It is intuitively obvious that a milestone's importance is linearly proportional to the amount of complexity it adds, and also linearly proportional to how long the system survives unchanged following the milestone. The greater the complexity jump at a given milestone, or the longer the ensuing stasis, the greater the milestone's importance will be.
The complexity change associated with a certain milestone will then be inversely proportional to the time period to the next milestone. And to the extent that we are considering milestones of comparable importance, we have a means of quantitatively comparing the change in complexity associated with each jump.
Following each milestone the complexity of the system increases by a certain amount; at the next milestone there is another increase. Assuming that milestones are of approximately equal importance I, the above definition of importance gives the increase in complexity ΔC_i associated with milestone i as

ΔC_i = I / ΔT_i        (2)

where ΔT_i is the time period between milestone i and milestone i+1.
We thus have a relative measure of the complexity contributed by each milestone to the system. If milestones become progressively crowded together with time, their complexity is expected to become progressively larger; see Figure 1.

Figure 1. To the extent that milestones of equal importance appear more frequently, their respective complexity increases. The area of each rectangle represents importance and remains constant. The scales of both axes are linear.
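The arithmetic above can be sketched in a few lines of Python; the milestone dates and the common importance value below are hypothetical, chosen only to illustrate Equation (2):

```python
# Sketch of Equation (2): under the equi-importance assumption each
# milestone's complexity jump is inversely proportional to the time
# gap to the next milestone. Dates (in years ago) are hypothetical.
dates_years_ago = [15e9, 4.5e9, 550e6, 5e6, 10_000, 200, 50]
IMPORTANCE = 1.0  # common importance I, arbitrary units

def complexity_jumps(dates, importance):
    """Delta-C_i = I / Delta-T_i for each consecutive pair of milestones."""
    return [importance / (earlier - later)
            for earlier, later in zip(dates, dates[1:])]

jumps = complexity_jumps(dates_years_ago, IMPORTANCE)
# As the milestones crowd together, the jumps grow, as in Figure 1.
assert all(b > a for a, b in zip(jumps, jumps[1:]))
```

The units of the jumps are arbitrary; only their ratios matter, which is all the analysis requires.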

2. The time frame is vast, and the crowding of milestones in recent times is so dense, that no logistic or exponential function of time can be used to describe the growth process.

A logistic function does not necessarily need to be a function of time. Moreover, there are processes for which our Euclidean conception of time is not appropriate. For this analysis a better-suited time variable is the sequential milestone number, because this way we sidestep the singularity as T → 0. Once forecasts are obtained for the complexity jumps associated with future milestones, we can use the definition of importance, coupled with the equi-importance assumption, to derive explicit dates for future milestones.

3. Milestones from different evolutionary processes (cosmological, geological, biological, etc.) and from different authors (physicists, biologists, historians, etc.) need to be combined in a rigorous way. There is a need for normalization when authors furnish data sets with different numbers of milestones for the same chronological period.

The equi-importance assumption is key to dealing with both of these issues. If all milestones in a data set are equally important, then the corresponding complexity jumps, calculated as described in Challenge 1, are directly comparable no matter what evolutionary process they belong to. Similarly, if someone's data set contains more milestones than someone else's for the same chronological period, then the milestones in the former set must carry less importance than those in the latter. The data sets are normalized so that they give the same overall complexity contribution for the same time periods.
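This normalization step can be sketched as follows; the two toy data sets and the choice of the common reference total are assumptions for illustration:

```python
def normalize(datasets):
    """Rescale each data set's complexity jumps so that every set
    contributes the same cumulative complexity over the shared period."""
    totals = {name: sum(jumps) for name, jumps in datasets.items()}
    target = sum(totals.values()) / len(totals)  # common reference total
    return {name: [j * target / totals[name] for j in jumps]
            for name, jumps in datasets.items()}

# Set B lists more milestones than set A for the same period, so its
# individual milestones must carry proportionally less importance.
sets = {"A": [1.0, 2.0, 3.0],
        "B": [0.5, 0.5, 1.0, 1.0]}
normed = normalize(sets)
# Both sets now have equal cumulative contributions.
```

The rescaling preserves the relative spacing of jumps within each set, which is the information the analysis actually uses.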

4. How many turning points should an adequate data set contain? One can always argue that a large number of important events have been neglected.

If we consider only the most important milestones, we can invoke Pareto's rule (also known as the 80/20 rule) to argue that 20 percent of all milestones account for 80 percent of all complexity acquired during the time period in question. Moreover, dealing with only major milestones improves the equi-importance requirement: milestones of large importance are by definition milestones of comparable importance. Naturally some of them will be more important than others, but the average importance will be a relatively large number, and the spread around this average relatively small. Therefore, to a first approximation we can treat all milestones as being of equal importance.

Remark: A milestone is assigned to a point in time, i.e., a date. If more than one event is associated with the same date, the milestone's importance reflects the sum total of the importance of all such events.

The Data
My first attempt to compile a set of milestones and determine a growth law from it turned out bittersweet. I analyzed 20 milestones compiled during a brainstorming session with colleagues. This early data set proved amenable to a description by a logistic curve, but the result was subsequently criticized on the ground that there could be bias in the choice of milestones. So I set out to find more objective data from independent and reliable sources in order to be able to defend them as unbiased.
Searching the Internet for something like "Major Events in the History of..." yields scores of pointers to chronologies, so-called "timelines." Many of them have to do with some classroom assignment. Some of them stand out in terms of completeness and credibility. I briefly present below six of the thirteen data sets I have retained. A complete list of the data used in the analysis, including milestone descriptions and dates, can be found in Appendix A.
- The Cosmic Calendar. Carl Sagan has put together a one-year calendar matching the entire history of the Universe and pointing out the dates of major events. [4] The set consists of 47 milestones that cover the entire time period (big bang to present) but suffer somewhat from the calendar format: time resolution becomes insufficient for milestones that fall in the same time bucket. This happens with the calendar's monthly buckets, and again later with the buckets of seconds. In fact, during these periods of saturated time resolution Sagan seems to enumerate milestones on a bucket-by-bucket basis, reporting on things that happened during each time bucket, as if driven by the structure of the buckets instead of the spacing of the events.

The data used in the analysis incorporate milestones from thirteen data sets, the last of which is the author's own. I decided to include a data set of my own for two reasons. First, I believe that having gone through all the research, I was well positioned to distill a rather complete, defensible, and scientific set of evolutionary milestones. Second, I needed data on the twentieth century, neglected by the other authors. Of the other twelve sets, only Sagan's addresses the twentieth century, and his data are plagued by the calendar-format problem mentioned earlier.
From the 13 data sets only Paul Boyer's and mine were created in direct response to the question: Which are the 25 most significant milestones in the evolution of the Universe? The motivation of other authors, like Sagan and A.M.N.H., was to put events into a time perspective. But in so doing, they answered the same question simply by selecting what to list as major events.
Because of the different number of milestones between data sets, and the fact that different sets sometimes give different dates for the same event (e.g., the time of the big bang ranges from 13 to 20 billion years ago), I decided to derive a "canonical" set of milestones and use the spread between authors to calculate errors. My assumption was that there must be some coherence between the 13 data sets, i.e., many milestone dates must be common to most sets. Combining 13 data sets into one greatly reduces the uncertainties on the results. Figure 2 shows a histogram of all milestone dates (a total of 302) with logarithmically increasing time buckets as we go backward in time. This choice of binning the data is not arbitrary. It became obvious when I plotted the 302 points on a number of linear graphs, each with different-size time buckets. The logarithmically increasing time buckets are chosen in such a way that each bucket receives one cluster of milestones. The peak of each cluster is used to define a date for a milestone of the canonical set used as the time variable in our analysis. There are twenty-eight canonical milestones, but because of complexity's definition (Equation 2) there are only twenty-seven peaks in Figure 2.
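The log-bucket histogram behind Figure 2 can be sketched with NumPy; the milestone dates below are a hypothetical stand-in for the 302 collected dates, and the bucket edges are illustrative:

```python
import numpy as np

# Buckets increase logarithmically going backward in time, so that the
# densely packed recent milestones and the sparse ancient ones can each
# form a visible cluster. Dates (years ago) are hypothetical.
dates_years_ago = np.array([1.5e10, 1.0e10, 4.6e9, 5.4e8, 2.0e8,
                            5e6, 1e5, 1e4, 5e3, 1e2, 10.0])
edges = np.logspace(0, 10.3, num=28)  # 27 log-spaced buckets, 1 yr to ~20 Gyr
counts, _ = np.histogram(dates_years_ago, bins=edges)
# Each populated bucket would define one "canonical" milestone date,
# taken at the peak of the cluster inside the bucket.
```

With real data one would read off the peak within each bucket, and use the spread around the peak as the error on the canonical date.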

THE CANONICAL SET OF MILESTONES
For each peak the average complexity change is calculated, as well as an error given by the spread around the peak (one standard deviation). For peaks featuring only one entry (for example, milestones during the last 100 years) I arbitrarily assign the average error as the error. Fractional milestone numbers are assigned to all milestones according to their dates.

Figure 2. A histogram of all milestones with logarithmic time buckets. The thin black line is superimposed to outline the peaks that define the dates of the "canonical" milestones. On the horizontal axis we read the dates of these milestones.

The Analysis
A distribution of the change of complexity per milestone for all thirteen data sets is shown in Figure 3. The different data sets have been normalized for equal cumulative complexity contributions over identical time periods. Consequently the units of the vertical axis are arbitrary up to an overall multiplicative constant. The picture comparing the normalized data for all thirteen sources is rather coherent, as there is good agreement between the different data sets. Furthermore, the data points generally line up on a straight line in a semi-log plot, which is the hallmark of exponential growth, or alternatively, the early part of logistic growth. The milestone-number axis marks the milestones of the canonical set. We can now proceed to fit the data with an exponential and a logistic function. Given that Figure 3 depicts complexity's rate of growth (i.e., complexity change per milestone), we expect the trend to follow the first derivative of the two functions. We therefore fit to two expressions: an exponential, whose first derivative is again an exponential, and the logistic life cycle

ΔC(x) = M α / [(1 + e^(-α(x - x0)))(1 + e^(α(x - x0)))]

where M, α, and x0 are constants and x is the sequential milestone number. The logistic life cycle is the first derivative of the familiar logistic function:

C(x) = M / (1 + e^(-α(x - x0)))

Figure 4 shows the canonical set of milestones with an exponential and a logistic fit superimposed. The logistic fit is better than the exponential one (70% confidence level compared to 30%). Table I shows the particular details of the fits. I have made an attempt to be scientifically correct. However, the reader should be aware that the chi-square estimates (and the associated confidence levels) cannot reflect all uncertainties. There are sources of error that have not been properly accounted for: for example, errors due to widely different dates quoted for the same event (sometimes with good reason, as the exact date is still being debated), or errors due to the approximation that the milestones are equally important. The midpoint of the logistic function is milestone number 27.89, which corresponds to 10 years ago. In other words, complexity grew at the highest rate ever around 1990. From then onward complexity's rate of change began decreasing. Future milestones of comparable importance will henceforth be appearing less frequently.

But according to the exponential law, milestones punctuating complexity jumps will continue appearing closer together at the same exponential rate, and 25 years from now we should expect successive turning points of the same importance to be spaced only 5 days apart. Table II spells out the timing of future milestones as expected from the logistic and exponential growth laws determined by the above fits. The accuracy of the results, as reflected in the significant digits retained in the numbers reported, may seem overly optimistic. However, the reader should bear in mind two things. First, the curves are extremely steep; on a linear time scale they would appear practically horizontal across billions of early-Universe years. Second, the significant digits in the results reflect more the precision of the method and less the accuracy of the answers, because not all systematic errors have been accounted for (see the earlier remark on sources of unaccounted errors).
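Turning forecast complexity jumps back into calendar dates uses Equation (2) in reverse: the gap to the next milestone is the common importance divided by the forecast jump. The importance value and jump forecasts below are hypothetical:

```python
# Equation (2) inverted: Delta-T_i = I / Delta-C_i. Under the logistic
# law the forecast jumps decline, so the gaps between milestones widen.
IMPORTANCE = 100.0                      # arbitrary units (hypothetical)
forecast_jumps = [2.6, 2.0, 1.4]        # declining rate (hypothetical)

gaps = [IMPORTANCE / dc for dc in forecast_jumps]  # years between milestones
dates = []
elapsed = 0.0
for g in gaps:
    elapsed += g
    dates.append(round(elapsed, 1))     # years from now
# Widening gaps: each future milestone is farther from the previous one.
assert gaps == sorted(gaps)
```

Under the exponential law the forecast jumps would instead keep growing, and the same inversion would produce the ever-shrinking gaps of Table II.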

THE CLOSE-UP PICTURE
The case can be made, if less rigorously, for a finer structure in the evolution of the trajectory of complexity's change. It has been shown that any growth process may consist of smaller logistic subprocesses. [7] Looking at Figure 3 closely we can discern smaller S-shaped steps. Such structure indicates an alternation between periods when the milestones progressively crowd together and periods when they are roughly regularly spaced in time. This is largely due to the fact that as we move through time we encounter a number of rather well-defined evolutionary subprocesses. The thin black line in Figure 3 (representing the average change of complexity per milestone) suggests at least seven such subprocesses. In Figure 5 logistic curves are adapted to these segments.
The seven logistic curves do not result from rigorous fits to the data because there are too few milestones and too much jitter on the data points in each segment (otherwise said, the errors are too large for the fitting procedure to work). The thick gray lines are logistic functions drawn in simply to guide the eye. However, the fair agreement between the thick lines and the corresponding sections of the dotted line is evidence that we are dealing with rather independent natural-growth processes. In order to better understand the seven subprocesses, Table III lists the relevant parameters for each one. The mathematical parameters of the logistic functions being of less interest, it is preferable to give the dates corresponding to the 10%, 50%, and 90% penetration levels for each process. The range 10%-90% of a logistic growth process is traditionally taken as the period of main thrust toward higher growth. Above the 90% level one can argue that a stable maximum level has been reached. The names given to the seven phases have been inspired by what happened during each subprocess. Consequently, "Cosmic" refers to the process around the formation of our galaxy. "Geological" refers to early forms of life and is centered on the appearance of multicellular life. "Hominization" is the period between the divergence of the orangutan from Hominidae and the development of speech; it is centered on the appearance of bipedalism and the first stone tools. "Homo sapiens" is a relatively short period dominated by Homo sapiens and the domestication of fire. "Modern human" extends between the first burial of the dead and the invention of agriculture; it is centered around the time of rock art, and includes ritual/spiritual behavior (magic, shamanism). "Civilization" is a name inspired by city dwelling and religion becoming important; it is centered around the appearance of writing and the wheel.
Finally, "Scientific" is the growth phase that begins with the Renaissance and ends with modern physics; it is centered on the industrial revolution and the establishment of the scientific method.

Discussion of Results
This paper studies the evolution of complexity from the beginning of the Universe to the present day. The hypothesis, verified via a successful logistic fit on data, is that a simple diffusion law has been governing complexity's growth across diverse evolutionary processes (cosmological, geological, biological, etc.). We are obviously concerned with an anthropic Universe here, since we are overlooking how complexity has been evolving in other parts of the Universe. Still, the author believes that such an analysis carries more weight than just the elegance and simplicity of its formulation. John Wheeler has argued that the very validity of the laws of physics depends on the existence of consciousness. 2 In a way, the human point of view is all that counts! The work reported here links logistic growth and complexity in two different ways. One way is how complexity has been accumulating in the Universe along a large logistic curve (Figure 4). Another way is how complexity's rate of growth has been following smaller logistic curves in the close-up picture of Figure 5. There is a fundamental difference between these two pictures. The former involves an S-shaped pattern fitted to the amount of change accumulated, whereas the latter involves fitting S-shaped patterns to the rate of change. In both cases evidence for logistic growth argues for natural growth in competition (Darwinian in nature), but the interpretations are different.

SEEING COMPLEXITY AS A COMPETITIVE GROWTH PROCESS
2 John Wheeler is professor at Princeton University and presently director of the Center for Theoretical Physics at the University of Texas, Austin.
Observation of logistic growth enables one to argue for the existence of Darwinian competition. Such competition implies that:
- Some "species" is capable of growing via multiplication.
- Members of the "species" compete for a limited resource.
- There is natural selection.
In the logistic function of Figure 4 the "species" is the system's complexity and its members are the complexity chunks carried by the milestones. The limited resource is the system's cumulated final complexity. It is limited because too much complexity may hurt survival as per Kauffman's argument for complexity catastrophe mentioned earlier.
In the logistic functions of Figure 5 the "species" is the speed with which each evolutionary subprocess proceeds, and its members are the jumps in speed during the rapid-growth phase (when turning points appear progressively more frequently). The limited resource is the maximum speed characteristic of the evolutionary subprocess in question (e.g., geological evolution reached higher levels of complexity per milestone than cosmic evolution).
There is selection everywhere. Changes, be they in complexity or in the rate of growth of complexity, are like mutants; only the best-fit ones survive. Potential changes lurk around like potential accidents, waiting for the opportunity to become realized. If a change represents too large or too small a step for the moment in the history of the evolutionary process it belongs to, it will not survive (i.e., it will not become realized). At the same time, if the system's cumulated complexity approaches saturation, some billion years from now, changes, and the evolutionary subprocesses they belong to, will have to be confined to minuscule sizes. Big mutations at that point in time will simply have no chance of being realized.

THE ULTIMATE S-CURVE
The large-scale logistic description of Figure 4 indicates that the evolution of complexity in the Universe has been following a logistic growth pattern from the very beginning, i.e., from the big bang. This is remarkable considering the vastness of the time scale, and also the fact that complexity resulted from very different evolutionary processes, for example, planetary, biological, social, and technological. The fitted logistic curve has its inflection point (the time of the highest rate of change) around 1990. Considering the symmetry of the logistic-growth pattern, we can thus conclude that the end of the Universe is roughly another 15 billion years away. Such a conclusion is not really at odds with the latest scientific thinking that places the end of the solar system some 5 billion years from now.
The ultimate S-curve of Figure 4 is not a function of time but of milestone number. The S-shaped pattern would be rather distorted if we plotted complexity as a function of time (very flat for billions of years and very steep at present). But the forecasts of complexity per future milestone can be translated to complexity per future date according to Equation (2). We therefore see from Table II that the next milestone is due 38 years from now, the following one 45 years later, and the third one 69 years after that. To give some perspective we can look at the last three milestones:
- 5 years ago: Internet / human genome sequenced
- 50 years ago: DNA / transistor / nuclear energy
- 100 years ago: modern physics (radio, electricity, etc.) / automobile / airplane
In other words, dates for world-shaking milestones like the above should be expected around 2038, and then again around 2083 and 2152.

INDEPENDENT CORROBORATION
During this paper's reviewing process one of the reviewers brought to my attention that Richard Coren has done a similar analysis on a set of 13 events he described as "critical transitions in evolution on Earth" in his book The Evolutionary Trajectory. [8] Coren looked at evolution in terms of information transfer, much like J. M. Smith and E. Szathmary did with their small set of six transitions. [9] I could not resist trying my approach on Coren's data set.
The logistic fit turned out excellent, with a midpoint around 1860 A.D. I consider this to be in exceptionally good agreement with my result (1990 A.D.), given that Coren's sampling is much coarser; his data set has fewer than half the data points of my canonical set. Data sets with few points, when individually analyzed, generally gave much poorer agreement. Moreover, in view of the earlier discussion on unaccounted systematic errors, such close agreement must be considered fortuitous. Nevertheless it brings a certain corroboration.
For the sake of completeness Coren's data set, my logistic fit to it, and the corresponding graph are given in Appendix B.

OTHER INSIGHTS
According to the classification of Table III, events like the Cambrian explosion are not the singular turning points they are purported to be. Once the Geological subprocess was completed, important events continued to take place for a long stretch of time (almost 800 million years) at a maximum but rather constant rate. The Cambrian explosion was one such event; others were:
- Appearance of invertebrates
- Plants colonized land
- Appearance of amphibians
- Appearance of insects
- Appearance of reptiles
- Mass extinction (trilobites)
- Appearance of dinosaurs and mammals
- Birds evolved from reptiles
- Appearance of flowering plants
- Asteroid collision and the ensuing mass extinction (including dinosaurs)
All these events took place between the end of the Geological growth phase (around 800 million years ago) and the beginning of the Hominization phase (around 20 million years ago), and are roughly of comparable time spacing (hence complexity) and importance.
Special significance has been attributed to the Cambrian explosion (and to other events like the invention of agriculture, the discovery of DNA and nuclear energy, and the Internet and the sequencing of the human genome), and yet such events do not constitute turning points between distinct evolutionary growth processes; rather, they occupy the stretches of time characterized by uniform change between the end of one subprocess and the beginning of the next one. Contrary to what one may have expected, complexity increased at a rather constant, albeit large, rate during the twentieth century. The major thrust forward of the Scientific evolutionary process took place earlier, around the invention of the steam engine.
A better identification for the seven evolutionary subprocesses is provided by events that occupy the time periods when the rate of growth of complexity underwent a sharp increase; these are the events around the 50% points of Table III.

Another interesting observation in the close-up picture of Figure 5 is the minuscule rate of growth of complexity before significant life forms appeared (i.e., before hominization). This fact accords with the well-accepted notion that there was no complex matter in the universe before life; according to astrochemists, complex molecules are not found in the universe outside of life.

Sitting on Top of the World
Summarizing the conclusions, we can say that the Universe's complexity has been growing along a large-scale logistic pattern that has just reached its midpoint. In fact, the rate of complexity's growth has just reached its maximum, after having gone through seven steps, each of which can itself be interpreted as a natural-growth subprocess. As the rate of change begins declining, the next subprocess is expected to be a downward step following an upside-down S-curve.
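The symmetric rise and fall implied here can be illustrated numerically: the rate of change of a logistic is a bell-shaped curve that peaks exactly at the midpoint and then declines as the mirror image of its earlier growth. The parameters below are arbitrary placeholders, not the fitted values of the paper.

```python
import math

# Illustration with arbitrary parameters (not the paper's fit):
# for a logistic S(t) = 1 / (1 + exp(-(t - t0)/tau)),
# the rate dS/dt = S(1 - S)/tau is a symmetric bell peaking at t0.

t0, tau = 0.0, 1.0

def rate(t):
    """dS/dt = S(1 - S)/tau for the logistic above."""
    s = 1.0 / (1.0 + math.exp(-(t - t0) / tau))
    return s * (1.0 - s) / tau

print(rate(t0))               # → 0.25 (the maximum rate, at the midpoint)
print(rate(-2.0), rate(2.0))  # equal rates either side of the peak
```

The decline after the midpoint thus retraces the rise before it, which is what is meant above by a downward step following an upside-down S-curve.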
But the analysis of complexity's evolution also gave an exponential pattern, if with a lower confidence level, as a possibility for the appearance of future milestones. For skeptics of the logistic, those who advocate that complexity can continue growing exponentially, Table II tells us that the next milestone should come in 13.4 years, the following one in 6.3 years, the one after that in 3 years, then again in 1.4 years, and so on. The pattern becomes so steep that all future milestones would appear within less than 26 years from now. In other words, people still alive in 2026, i.e., the generation born in the mid 1940s or later, would have witnessed before they die all the change that can ever take place! Therefore, in addition to the goodness-of-fit argument, there is a common-sense argument that favors the logistic-law alternative.

But the logistic life cycle also peaks during the lifetime of people born in the mid 1940s. In particular, it spells out that we are presently traversing the only time in the history of the Universe in which 80 calendar years can witness change in complexity coming from as many as three evolutionary milestones. We happen to be positioned at the world's prime! Coincidentally, the mid 1940s is the time of the baby boom that created a bulge in the population distribution. As if by some divine artifact, a larger-than-usual sample of individuals was meant to experience this exceptionally turbulent moment in the evolution of the cosmos.
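The convergence claim behind the 26-year figure can be checked with a quick sketch: the quoted milestone spacings shrink by a roughly constant factor, so their infinite sum is bounded by the geometric-series limit. The averaging of the ratio below is an illustrative shortcut, not the paper's calculation.

```python
# The quoted spacings under the exponential fit shrink roughly
# geometrically; summing the implied infinite series bounds the
# horizon within which all future milestones would occur.

gaps = [13.4, 6.3, 3.0, 1.4]                    # years, from Table II
ratios = [b / a for a, b in zip(gaps, gaps[1:])]
r = sum(ratios) / len(ratios)                   # average shrink factor
horizon = gaps[0] / (1.0 - r)                   # geometric-series limit
print(round(r, 2), round(horizon, 1))           # → 0.47 25.3
```

The total comes to about 25 years, consistent with the statement that every remaining milestone would fall within less than 26 years.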
Acknowledgments
The author would like to thank Eric L. Schwartz, Professor of Cognitive and Neural Systems at Boston University, for many useful discussions, the first of which led to the conception of this research work.

APPENDIX A
This appendix contains the raw data used in the paper. The first twelve data sets were provided by independent sources; they influenced to some extent the thirteenth data set, which was compiled by the author. Milestones denote dates; consequently, events occurring at the same time are represented by a single milestone.

The data set of Carl Sagan as outlined in his Cosmic Calendar.[4]
The precise year numbers have been assigned by the author.

Milestone                                                                   Years ago
…                                                                           1.0E+06
20  Acquisition of written language                                         5000
21  They learn that knowledge comes from observation and experiment
    (scientific method)                                                     500
22  Ability to control nature gives rise to a human population explosion    200
23  The above abilities give rise to a remarkable understanding of nature   100
24  Human activities devastate species and the environment                  -
25  Humans disappear; geological forces and evolution continue              -

6. The data set below represents "major events in the Universe history" as published in Scientific American by John D. Barrow