Emergence versus neoclassical reductions in economics

ABSTRACT Many epistemic anomalies of the neoclassical research programme originate from its ontologically reductionist meta-axioms, which predicate how economic macro-systems are constituted from their micro-entities and how the latter behave – namely atomistic aggregativity, normative equilibration and global instrumental rationality. This paper explores the metaphysical foundations of the premise of emergence and argues that it can be a remedy to the ills of neoclassical reductions, and a foundational epistemic principle in a progressive systemic research programme in economics, which would bridge existing streams of ‘heterodox’ economic theory.


Introduction
The neoclassical research programme (NRP) in economics, namely the meta-theory that rests on the axiomatic assumptions of rational choice, individualistic utility or profit maximisation and ex ante equilibration of aggregate demand and supply, has shown an extraordinary resilience vis-à-vis epistemic anomalies identified since its early days. 1 These anomalies are either ignored or tackled with ad hoc extensions of its 'protective belt' of auxiliary assumptions, leaving its 'hard core' intact, and effectively extending its life and safeguarding its dominant position in academia instead of enhancing its empirical content or its explanatory power. I claim that these anomalies stem directly from a set of reductionist axiomatic premises at the hard core of the NRP, which have given birth to a research programme laden with logical inconsistencies, nomological impossibilities, epistemological fallacies and lack of empirical content, and inevitably lead to its degeneration. I assert that the metaphysical principle of emergence is indispensable for understanding complex economic systems and for explaining related economic phenomena, and a potential remedy to the ills of neoclassical reductions. Emergence can become a foundational premise in a progressive (à la Lakatos) research programme in economics, which will bridge several dissident research programmes that are consistent with the metaphysics of emergence and have developed dichotomously to the NRP at the margin of the economics discipline.
Section 2 examines the metaphysical underpinnings of emergence versus reduction, its necessary conditions and its varieties. Section 3 analyses the epistemic anomalies that arise from the strong affinity of the NRP with ontological reductionism. Section 4 discusses some of the consequences of emergence for economic theory. Section 5 wraps up the arguments and reflects on the potential role of emergence as a foundational epistemic principle of a systemic research programme in economics that will bridge existing 'heterodox' theories.

The metaphysics of emergence
Emergentism arose in the late 19th century as an ontological theory primarily intended to explain the relationship between the mind and mental processes, and the biological and physical constitution of the body, reaching its heyday under the influence of Darwinian evolutionism in the early 20th century (Blitz, 1992). 2 The metaphysical foundations of emergence were later questioned by mainstream philosophy of science in the analytical-positivist tradition (Nagel, 1961; Hempel, 1965), leading to the marginalisation of emergentism for a good part of the twentieth century. The resurgence of emergence in recent years is due to the shortcomings of the reductionist project and the rise of the science(s) of complexity, for which emergence is a foundational premise. In this context, emergence is associated with a wide range of physico-chemical, biological, cognitive and socio-economic phenomena. However, emergence has yet to find its right place in the social sciences, and even more so in economic theory.
1 Inter alia, Veblen (1898); Kaldor (1972); Robinson (1974); Sen (1977); Georgescu-Roegen (1979); Mirowski (1989); Kirman (1989); Ormerod (1994); Blaug (1998); Keen (2001); Ackerman (2002); Lawson (2003).
2 Emergentism in the modern era is associated with the work of Lewes (1875), preceded by Mill (1843) and Bain (1870), and followed by a generation of British emergentists (Alexander, 1920; Morgan, 1923; and Broad, 1925), as well as the idiosyncratic work of Hartmann (1940) in the Continental post-Kantian tradition. In the influential work of Morgan (1923), in particular, it blends with the Darwinian theory of evolution ('emergent evolution'), while in the work of Broad (1925) it makes inroads into analytic philosophy (Blitz, 1992).

Mereology, dependence and the layered model of the world
The common understanding of emergence is distilled in the motto 'the whole is more than the sum of its parts', inaccurately attributed to Aristotle. Emergence as a philosophical concept has subtler ontological and epistemological ramifications than this aphorism implies, yet this evokes an important aspect of emergence, namely that it is primarily about mereological relations. 3 That 'the whole is more than the sum of its parts' is the vernacular interpretation of the principle of non-mereological composition, which is indeed discussed in Aristotle's Μετὰ τὰ Φυσικά: It refers to the "composition of elements which results in a whole that is different from the mereological fusion of these elements" (Scaltsas, 1990: 583), and is one of the widely recognised necessary conditions for emergence. 4 A similar concept is Wimsatt's (2000) non-aggregativity. 5 An essential framework for the emergentist/reductionist debate is the layered model of the world (Kim, 1999), which assumes that the world is organised in a nested hierarchy of mereologically dependent levels of upward-increasing complexity, or in other words, that "the natural world [is] stratified into levels, from lower to higher, from the basic to the constructed and evolved, from the simple to the more complex" (Kim, 1999: 19), with entities at the lower level being constituent parts of those at the immediately higher one. The layered model implicitly assumes ontological monism -a common feature of both reductionist and emergentist theories. 6 However, in the former, as Silberstein (2002: 81) explains, the most fundamental physical level is assumed to be the 'real' ontology of the world, while all other levels must be able to be mapped onto or built out of its elements, and as a result, 'fundamental theory' is perceived as having greater predictive and explanatory power, and providing a deeper understanding of the world. 
On the contrary, the latter "[rejects] the idea that there is any fundamental level of ontology. It holds that the best understanding of complex systems must be sought at the level of the structure, behaviour and laws of the whole system and that science may require a plurality of theories" (ibid.). In other words, emergentism, unlike reductionism, considers that higher ontological levels are irreducible to lower ones, and hence they retain a certain degree of ontological autonomy. A metaphysical principle closely related to ontological monism is the causal closure of the physical world, 7 which is consistent with reductive physicalism but largely incompatible with emergence.
Supervenience is a common type of inter-level dependence associated with both emergence and reduction. A generic definition is that "a set of properties Y supervenes upon another set X only when no two objects can differ with respect to their Y-properties without also differing with respect to their X-properties" (McLaughlin & Bennett, 2005). Supervenience is a reflexive, transitive but non-symmetric relation, which can hold with nomological and/or metaphysical necessity, and in general does not require ontological dependence or entailment (ibid.). When mereological dependence between different-level entities is involved, a similar definition is that two mereologically complex entities with identical micro-specifications cannot differ at their own level; or in other words, that a difference between two composite entities requires a difference between their constituent parts. By this definition, a lower-level difference is a necessary but not sufficient condition for a higher-level difference. A stronger form of this principle is that "wholes are completely determined, causally and ontologically, by their parts" (Kim, 1978: 154). This additionally entails upward necessitation (or micro-determination) – a causal dependence between the supervenient and subvenient levels – and hence that a lower-level difference is both a necessary and a sufficient condition for a higher-level difference. In my view, this strong version of mereological supervenience inevitably leads to ontological reduction. 8 Only the weak version of supervenience is consistent with emergence, as it permits multiple realisability – the premise that different micro-specifications may generate identical composite entities – in lieu of upward necessitation.
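The weak reading of mereological supervenience can be sketched computationally. The following toy model is purely illustrative (the macro-property and all names are my own assumptions, not drawn from McLaughlin & Bennett or Kim): the macro-level is a function of the micro-level, so identical micro-states fix identical macro-states, while multiple realisability allows distinct micro-states to realise the same macro-state.

```python
def macro_property(micro_state):
    """A supervenient macro-property: fully fixed by the micro-state
    (here, simply the mean of the parts' values -- an illustrative choice)."""
    return sum(micro_state) / len(micro_state)

# Multiple realisability: two DIFFERENT micro-configurations
# realise the SAME macro-property...
a = [1, 2, 3, 4]
b = [2, 2, 3, 3]
assert a != b
assert macro_property(a) == macro_property(b) == 2.5

# ...while identical micro-states can never differ at the macro level:
# a lower-level difference is necessary, but (as a and b show) not
# sufficient, for a higher-level difference.
c = list(a)
assert macro_property(c) == macro_property(a)

print("macro-property of a and b:", macro_property(a), macro_property(b))
```

The sketch exhibits exactly the weak version described above: the macro-state supervenes on the micro-state without upward necessitation, since many micro-specifications map to one macro-specification.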

Epistemological and ontological reductions
The primary relata of epistemological reduction are theories or models. Nagel's (1961) classic model of inter-theoretic reduction essentially consists in a Hempelian-style deductive-nomological explanation of the reduced theory by the reducing one (Sarkar, 1992: 172). 9 A necessary condition for inter-theoretic reduction is 'connectability' on the basis of 'bridge laws' that essentially translate the predicates of the reduced theory into those of the reducing one. 10 Nagelian reduction, but also the deductive-nomological method of explanation by Hempel (1965) and their successors, draw their origins in the logical positivism of the Wiener Kreis (Carnap, Neurath, Feigl, et al.) and the project for the 'Unity of Science': the quest for a universal scientific 'language' that would unify all scientific disciplines corresponding to different stratified levels of reality, including the special ones (e.g. the social sciences), in terms of the vocabulary of the most fundamental level, typically considered to be that of physics. 11 In the social sciences this quest has a precedent in the Comtean construal of sociology as 'social physics'. 12
Ontological reduction involves properties or entities related through some type of ontological, usually mereological, dependence, which also implies an ontological stratification of the relata (i.e. the existence of 'higher' and 'lower', or 'prior' and 'posterior', levels). If a higher-level Y-entity is said to be reducible to a lower-level X-entity, then X is considered to be more fundamental than, to be prior to, to constitute, or to determine Y. Silberstein & McGeever (1999: 182-183) distinguish two types of ontological reduction: a strong type proposed by Scharf (1989), whereby Y-entities are nothing but sums of X-entities, and the properties of Y-entities are fully explainable in terms of X-entities and their inter-relations, which entails mereological decomposition and thus the elimination of the stratification of the relata; and a weak type identical to the strong version of mereological supervenience proposed by Kim (1978), which entails micro-determination and preserves the stratification of the relata.
8 Silberstein (2002: 83) also considers mereological supervenience to be a form of ontological reduction. In this paper I treat mereological supervenience as a weak form of ontological reduction, but, following McLaughlin & Bennett (2005: §3.3), I recognise that in general supervenience and reduction are distinct concepts.
9 The models that followed, as Silberstein (2002: 85) observes, are in one way or another either modifications or refutations of the standard Nagelian model. These include: Kemeny & Oppenheim (1956), Schaffner (1967), Nickles (1973), and Wimsatt (1976).
10 A bridge law is a well-defined biconditional between nomically coextensive predicates of the reduced and reducing theories.
11 "…the thesis defended in this paper [is] that science is a unity, that all empirical statements can be expressed in a single language, all states of affairs are of one kind and are known by the same method." (Carnap, 1934: 32)
12 "By social physics I mean the science whose proper object is the study of social phenomena, considered in the same spirit as astronomical, physical, chemical and physiological phenomena, that is to say as subject to invariable natural laws, the discovery of which is the special aim of its researches." (Comte, 1819-1826: Opuscules de Philosophie Sociale)

Definition(s) and varieties of emergence
It is notoriously difficult to define emergence in a robust, non-trivial and useful way. In recent work in analytical philosophy and in complexity theory there is a growing consensus as to the minimal conditions for an entity, a property, a law or a phenomenon to qualify as 'emergent'. A common basis for a definition of emergence can be found by looking back to the source: One of the most prominent British emergentists of that time, Broad (1925), comparing the two competing construals, 'Mechanism' – which roughly corresponds to reductionism – and 'Emergence', gives the following description of the latter: "On the emergent theory we have to reconcile ourselves to much less unity in the external world and a much less intimate connexion between the various sciences. At best the external world and the various sciences that deal with it will form a kind of hierarchy. We might, if we liked, keep the view that there is only one fundamental kind of stuff. But we should have to recognise aggregates of various orders. And there would be two fundamentally different types of law, which might be called 'intra-ordinal' and 'trans-ordinal' respectively. A trans-ordinal law would be one which connects the properties of aggregates of adjacent orders. A and B would be adjacent, and in ascending order, if every aggregate of order B is composed of aggregates of order A, and if it has certain properties which no aggregate of order A possesses and which cannot be deduced from the A-properties and the structure of the B-complex by any law of composition which has manifested itself at lower levels. An intra-ordinal law would be one which connects the properties of aggregates of the same order. A trans-ordinal law would be a statement of the irreducible fact that an aggregate composed of aggregates of the next lower order in such and such proportions and arrangements has such and such characteristic and non-deducible properties." (Broad, 1925: 77).
From this passage we understand the following:
§ The physical world is composed of a single 'substance'; hence emergence is consistent with ontological (materialist) monism.
§ The world is organised as a nested hierarchy of levels ('orders').
§ Adjacent levels are connected mereologically (higher-level entities are composed of lower-level ones) and nomologically ('trans-ordinal laws' connect the properties of higher- and lower-level entities).
§ The properties of entities within the same level are also connected nomologically but through a different type of laws, which are specific to their level ('intra-ordinal laws'); hence ontological monism is coupled with nomological pluralism.
§ Higher-level entities possess some properties which are mereologically irreducible to those of lower-level entities, despite being nomologically connected with them through the trans-ordinal laws.
O'Connor (1994: 97-98) formally defines an emergent property of a mereologically composite object to be a property that (i) supervenes on the properties of the object's parts; (ii) is not had by any of the object's parts; (iii) is distinct from any structural property of the object; 13 and (iv) has downward determinative influence on the patterns of behaviour of the object's parts. This definition adds two highly debated but also commonly accepted 'trans-ordinal' conditions of emergence not explicitly included in Broad's account: supervenience and downward causation. 14 The compatibility of supervenience with emergence, and in particular with the conditions of downward causation and non-structurality, 15 has been challenged, and alternative definitions have been advanced. Humphreys (1997) rejects synchronic supervenience as a condition for emergence and introduces the diachronic concept of fusion emergence, whereby the emergens results from the 'fusion' of the emergenda.
In this process the former acquires novel causal powers, while the latter cease to exist as ontologically autonomous entities. He postulates that emergent properties are characterised by novelty, which means that "a previously uninstantiated property comes to have an instance", and are qualitatively different from the properties from which they emerge; it is logically or nomologically impossible for them to be possessed at a lower level; different laws apply to their features than to the features from which they emerge; it is nomologically necessary for their existence that there be an essential interaction between their constituent properties; and they are global properties of the entire system rather than local properties of its constituents (Humphreys, 1997: S341-342). In a similar vein, O'Connor (2000), partly repudiating his earlier definition, abandons supervenience in favour of a dynamical concept of emergence as a non-deterministic causal relationship between the micro-states of the constituent parts of a complex object and its macro-state. From a different perspective, Bedau (1997) proposes a weak version of emergence which relaxes the disputed assumption of downward causation as well as the condition of non-structurality, and which is claimed to be "metaphysically innocent, consistent with materialism, and scientifically useful, especially in the sciences of complexity" (Bedau, 1997: 376). He defines a macro-state of a system (i.e. a structural property of a system constituted wholly out of its micro-states) to be weakly emergent if it can be derived from the system's external conditions and its micro-dynamics (i.e. the dynamics which determine the time evolution of the system's micro-states) only by simulation (Bedau, 1997: 378).
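Bedau's simulation criterion can be illustrated with a standard example from the complexity literature (my choice, not an example discussed in the text): in Conway's Game of Life, a macro-regularity such as the diagonal self-propagation of a 'glider' is obtained from the micro-dynamics, in general, only by explicitly iterating them. A minimal sketch:

```python
from collections import Counter

def step(cells):
    """One synchronous update of Conway's Game of Life.
    `cells` is the set of live (row, col) coordinates."""
    # Count, for every location, how many live neighbours it has.
    neighbour_counts = Counter(
        (r + dr, c + dc)
        for (r, c) in cells
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {
        cell for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in cells)
    }

# The standard glider pattern (a micro-configuration of five live cells).
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}

state = glider
for _ in range(4):          # four micro-steps = one glider period
    state = step(state)

# Macro-level regularity, visible only after simulating the micro-rules:
# the whole pattern has translated one cell down and one cell right.
shifted = {(r + 1, c + 1) for (r, c) in glider}
print(state == shifted)     # True
```

The glider's displacement is a structural macro-state wholly constituted by the micro-states, yet it is derived here by running the micro-dynamics step by step, which is precisely the sense of Bedau's 'derivable only by simulation'.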
An explicit distinction between synchronic and diachronic emergence is drawn by Rueger (2000), who, nevertheless, in both cases refers to a weak version of emergence. Diachronic emergence is an evolutionary phenomenon observed in the structural properties of dynamical systems and strongly associated with endogenous novelty. Crutchfield (1994: 12) defines diachronic emergence as "a process that leads to the appearance of structure not directly described by the defining constraints and instantaneous forces that control a system". In a similar vein, Durlauf (2001) considers (diachronic) emergence to be the spontaneous generation of new higher-level properties of a system that are not present at the lower level of its constituent units; as a result, the examination of the constituent units of the system in isolation does not reveal its aggregate dynamics, despite the fact that its initial conditions and its evolutionary trajectory (its 'history') are still relevant in understanding its present condition.
A further distinction between ontological and epistemological emergence, similar to the one between ontological and epistemological reduction, is drawn by Silberstein & McGeever (1999). They conceive the former as the failure of both strong and weak forms of reductionism (corresponding to mereological fusion and mereological supervenience respectively), which involves wholes that possess causal powers not reducible either to the intrinsic causal powers of their parts or to any relations between these parts. Their construal of epistemological emergence, on the other hand, is that it occurs whenever a property of a whole is reducible to or determined by the intrinsic properties of its parts, while at the same time it is very difficult to be explained, predicted or derived on the basis of the properties of its parts. They conclude that "epistemologically emergent properties are novel only at a level of description" (Silberstein & McGeever, 1999: 186). This is almost equivalent to the weak emergence proposed by Bedau (1997). It is also similar to Teller's (1992: 140-141) definition, which states that 'a property is emergent if and only if it is not explicitly definable in terms of the non-relational properties of any of the object's proper parts', allowing for an object's property to be considered emergent even if it is deducible from the relational properties of its parts. Thus defined, and without any further restrictions, emergent properties would include many trivial cases. These definitions of emergence leave open the possibility that the irreducibility of an observed phenomenon be due to the framing and reification of the phenomenon or even to the observer's cognitive limitations. In this context, the term phenomenological in lieu of 'epistemological' emergence is preferable.
In conclusion, ontological emergence is the formation of mereologically complex entities, which possess new, effective causal powers and enjoy a certain nomological autonomy vis-à-vis their constituent entities. To this should be added the important time dimension of ontological emergence noted in some of the above references: In the process of ontological emergence there is always a prior (pre-emergent) and a posterior stage, in which novel, more complex entities or properties come into existence. Ontological emergence is, therefore, necessarily diachronic. Phenomenological emergence, on the other hand, is the appearance of higher-level phenomena (e.g. patterns, analytical classes, organisational forms) that supervene on the properties of lower-level entities, but whose analytical reduction to these properties is infeasible, although their probabilistic or simulation modelling on that basis is not excluded.
To sum up the above propositions more systematically, ontological emergence is the diachronic formation of complex entities under the following macroscopic conditions:
C1. Layered model of the world, which assumes (i) ontological (materialist) monism but without physical causal closure, and (ii) a hierarchy of increasingly complex, mereologically dependent levels.
C2. Weak inter-level supervenience with nomological autonomy and multiple realisability.
C3. Non-mereological composition, which entails that complex entities possess non-aggregative and non-structural properties.
C4. Downward effective causation.
These definitions distinguish emergence from ontological reductionism, which is also monist but requires strong supervenience, rejects the ontological autonomy of the hierarchical levels and takes higher-level properties as fully reducible to lower-level, 'basal' ones; from epiphenomenalism, which is monist but essentially rejects supervenience and downward causation; and from all forms of ontological dualism, which assume the existence of more than one 'substance'.
The above conditions focus on part-whole, and hence inter-level, relations, without saying much about the intra-level (i.e. part-part) relations that make emergence happen. Emergence requires the enduring interaction of parts that share some relational property; non-interacting objects cannot generate emergent entities or properties. Not every part of a whole interacts with every other; a relatively stable but evolving anisotropic plexus of interactions with feedback loops connects the parts within the whole. Interacting entities are numerous and diverse in their properties; those with identical properties would generate compound entities that would be mereologically decomposable, and hence not emergent; multiplicity and diversity give rise to asymmetric interactions, which favour emergence. Interactions are non-linear, i.e. not additive or multiplicatively scalable (see §4.2); linearity leads to aggregativity, which is the nemesis of emergence. Finally, the outcome of the interactions is undecidable. 16 These microscopic conditions are summarised as follows:
c1. Multiplicity and heterogeneity of parts
c2. Non-linear interactions between parts with undecidable outcomes
c3. Quasi-stable, anisotropic plexus of interactions with feedback loops
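The failure of aggregativity under non-linear interaction can be sketched with a toy two-agent dynamical system (the update rule and all parameter values are assumptions of this sketch, chosen only for illustration): without the coupling term, the 'whole' is just the sum of its isolated parts; with non-linear coupling and feedback, the aggregate trajectory can no longer be recovered by adding up the parts studied in isolation.

```python
def evolve(x, y, coupling, steps=50):
    """Iterate two logistic-style units; `coupling` adds a non-linear
    interaction term with feedback between the two parts."""
    for _ in range(steps):
        x, y = (3.7 * x * (1 - x) + coupling * x * y,
                3.2 * y * (1 - y) + coupling * x * y)
        # keep the micro-states bounded in [0, 1]
        x, y = min(max(x, 0.0), 1.0), min(max(y, 0.0), 1.0)
    return x, y

x0, y0 = 0.2, 0.6   # heterogeneous initial micro-states (condition c1)

# No interaction: the 'whole' is merely the sum of independently evolved parts.
xi, yi = evolve(x0, y0, coupling=0.0)
additive_total = xi + yi

# Non-linear interaction with feedback (conditions c2-c3).
xc, yc = evolve(x0, y0, coupling=0.1)
coupled_total = xc + yc

# Under interaction, the macro-state cannot be recovered additively
# from the parts examined in isolation.
print(coupled_total != additive_total)
```

The point of the sketch is condition c2: the interaction term is multiplicative rather than additive, so the aggregate is not a linear superposition of the parts, which is the minimal sense in which linearity 'leads to aggregativity'.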

Neoclassical reductions in economic theory
Three meta-axioms establish the hard core of the NRP: global instrumental rationality, atomistic aggregativity, and normative equilibration. 17 The first premise, a behavioural principle of utilitarian origins, entails that self-interested economic agents optimise their objective functions globally, subject to feasibility constraints, with their actions coordinated through a price mechanism. Their optimisation capacities are unlimited and innate, while the preferences that define their objective functions are a priori and always consistent. The second premise entails that aggregate economic phenomena, and by extension the macro-properties of economic systems, are fully reducible to individual actions determined by atomistic behavioural rules, which are seen as their micro-foundations. The third premise places the physico-chemical concept of equilibrium at the centre stage of economic analysis. This normative heuristic leads to the construction of models that are a priori focused on the equilibrium aspects of economic systems and on their behavioural micro-foundations that are consistent with these aspects (i.e. global optimisation behaviour), while it leaves out-of-equilibrium dynamics, including the process through which economic systems reach their putative equilibria, out of their scope (Blaug, 1998). The first meta-axiom explains the behaviour of the micro-entities, while the second and third meta-axioms jointly explain the constitution of macro-systems from their micro-entities. In the following paragraphs I demonstrate that these meta-axioms motivate a research programme that espouses ontological reductionism, and I discuss some of the insuperable problems this espousal has given rise to. Weber (1922: 13) postulates that "[social] collectivities must be treated as solely the resultants and modes of organisation of the particular acts of individual persons, since these alone can be treated as agents in a course of subjectively understandable action".
Heath (2015) claims that the Weberian version of methodological individualism 18 should be distinguished from older types of atomism, in that it does not involve any ontological commitments, in particular with regard to "the content of the intentional states that motivate individuals", and thus "remains open to the possibility that human psychology may have an irreducibly social dimension", whereas atomism "entails a complete reduction of sociology to psychology". However, the assumption that social collectivities are fully determined, both causally and ontologically, by individual agents inevitably leads to an ontological reduction of the type of mereological supervenience, whereby collectivities ensue as aggregations of individuals possessing relational properties, and so all social phenomena are ultimately derivable from individuals' (intrinsic and relational) properties. This assumption is not, therefore, as ontologically innocent as Heath claims.

The problem of collectivity
Hodgson (2007) draws a distinction between 'methodological individualism' understood as a theory of social explanation and 'ontological individualism' as a theory of social ontology, and observes that the two premises are often conflated under the former term. He then advances the position, reflecting Arrow's (1994) similar arguments, that "all satisfactory and successful explanations of social phenomena involve interactive relations between individuals" since "the individual is a social being enmeshed in relations with others". This is claimed to be true even for neoclassical economics, given that "price mechanisms involve social interactions and structures, and social phenomena that cannot be reduced entirely to individuals alone". This leads him to the conclusion that the term 'methodological individualism' is per se ill-defined and eventually redundant.
The utilitarian homo oeconomicus as sketched by J.S. Mill (1844) is the embodiment of the doctrine of individualism in economics, 19 which is also strongly advocated by Menger (1883) in the context of his Methodenstreit against the Historical School and in opposition to the Classics. 20 The variety of individualism espoused by the NRP involves even stronger ontological commitments than Menger's, and ones similar to J.S. Mill's: The subject of the NRP, the homo oeconomicus, is nothing but an optimiser with a simple, single-argument (i.e. utility or profit) objective function, whose optimisation is global and unaffected by other individuals' actions, social relations, and collective norms, values and institutions, except for the apparatus of the price mechanism. The intentional state that governs her actions is thus a pre-social instrumental rationality. The interpersonal incomparability of individual utility functions and the strictly individualistic Pareto optimality criterion are direct corollaries of these assumptions. The redundancy of social structure and collectivities under the NRP means that sociology is reduced to psychology and collective phenomena to atomistic intentional states. Arrow's (1994) claim that even a neoclassical economy involves irreducible social institutions and structures (such as the price mechanism) is hence contestable: While this is true for real economic systems, the NRP axiomatically assumes and aspires to model all economic phenomena as if they derived from atomistic attributes (preferences, endowments, technologies), resulting in an internal inconsistency discussed in §3.2. Neoclassical individualism is, therefore, a strong form of ontological reduction of the type of mereological decomposition, 21 which, as the following paragraphs will make clearer, precludes dynamic interactions between individuals.
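The pre-social optimiser described above can be caricatured in a few lines of code. The example is a hypothetical Cobb-Douglas consumer, chosen purely for its closed-form solution (none of the numbers or names come from the text): the agent's demand is a function only of her own preferences, endowment and parametric prices, with no argument referring to other agents, norms or institutions.

```python
# A caricature of the neoclassical homo oeconomicus: a global optimiser
# with a single-argument (utility) objective, connected to the world only
# through parametric prices. Cobb-Douglas utility u(x, y) = x^a * y^(1-a)
# has the closed-form demands x* = a*m/px and y* = (1-a)*m/py.

def demand(alpha, income, px, py):
    """Utility-maximising bundle of a Cobb-Douglas consumer.
    Note the signature: preferences, budget and prices only --
    no other agents, norms or institutions appear."""
    x = alpha * income / px
    y = (1 - alpha) * income / py
    return x, y

x_star, y_star = demand(alpha=0.4, income=100.0, px=2.0, py=5.0)
print(x_star, y_star)

# Walras-style budget exhaustion: the optimum spends all income.
assert abs(2.0 * x_star + 5.0 * y_star - 100.0) < 1e-9

# A coarse grid search along the budget line confirms that the closed
# form is the global optimum (for this grid).
u = lambda x, y: x ** 0.4 * y ** 0.6
best = max(((x, (100.0 - 2.0 * x) / 5.0)
            for x in [i / 10 for i in range(1, 500)]),
           key=lambda bundle: u(*bundle))
assert abs(best[0] - x_star) < 0.1
```

The design point is the function signature itself: in this idealisation, everything social enters only through `px` and `py`, which is precisely the 'pre-social instrumental rationality' criticised above.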

The problem of aggregativity
A serious internal inconsistency of the NRP is demonstrated by the famous Sonnenschein-Mantel-Debreu (SMD) theorem (Sonnenschein, 1973; Mantel, 1974; Debreu, 1974). The theorem states that for any function which is continuous, homogeneous of degree zero and complying with Walras' law there is at least one price vector corresponding to an economy whose aggregate demand function is the function in question. This implies that while the properties of the aggregate excess demand function inherited from individual excess demand functions guarantee the existence of an equilibrium, they are not sufficient to guarantee its uniqueness or, as has additionally been shown, its stability (Sonnenschein, 1973). For a unique price vector to exist, additional artificial and arguably unnatural micro-behavioural assumptions beyond those imposed by the Weak Axiom of Revealed Preference (WARP) have been proposed (e.g. homothetic preferences, collinear initial endowment vectors, etc.), but these have proven to be futile (Mantel, 1976; Kirman & Koch, 1986).
The uniqueness of equilibrium would be guaranteed only if the WARP assumptions were imposed on the aggregate excess demand function itself, which, however, would spell the demise of the micro-foundations of the theory. The implications of the SMD theorem are devastating for all ramifications of general equilibrium theory (Rizvi, 2006): Kehoe (1985) shows that comparative statics analysis fails unless either the demand side of an economy behaves like a single consumer or the supply side is an input-output system. Roberts & Sonnenschein (1977) extend the implications of the SMD theorem to the case of imperfectly competitive general equilibrium. Kemp & Shimomura (2002) show that under less restrictive (and more realistic) assumptions, the stylised Heckscher-Ohlin model of international trade fails to fulfil the comparative statics propositions of general-equilibrium trade theory, even "without appealing to increasing returns to scale, the multiplicity of equilibrium or the noncompetitiveness of markets". The list of articles underscoring the negative implications of the SMD theorem for the consistency of general equilibrium theory and, by extension, for the NRP is endless. The failure to prove the uniqueness and stability of general equilibrium leads to the conclusion that the theory is 'unfalsifiable' in Popperian terms, and for this reason the SMD theorem is colloquially known as the 'anything-goes' theorem. Ultimately, the theorem demonstrates the failure of atomistic aggregativity, even when the axiomatic assumptions on individual economic behaviour are taken to be correct.
19 '…desires to possess wealth, and who is capable of judging of the comparative efficacy of means for obtaining that end.' (Mill, 1844)
20 Menger (1883: 196) claims that his opponents aimed at understanding economic phenomena "from the point of view of the 'national economy' fiction", and asserts that the goal of economic research should instead be "the explanation of the complicated phenomena of human economy in their present-day social form through the efforts and relationships of the individual economies connected by their commerce with each other." Here the term 'individual economies' is used in the sense of 'individual economic agents'.
21 A mereological decomposition statement is that society is nothing but a sum of atomistic units, as in the notorious Thatcherite dictum "[…] there is no such thing as society. There are individual men and women, and there are families."
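The 'anything-goes' content of the SMD theorem can be illustrated numerically. The two-good excess demand function below is a hypothetical construction of mine, not taken from the SMD papers: pick any continuous function z1(p) of the relative price with several sign changes, define z2 by Walras' law, and the theorem guarantees that some exchange economy of well-behaved consumers has exactly this aggregate excess demand, and hence multiple equilibria.

```python
# Illustration of the SMD ('anything-goes') theorem with two goods.
# Normalise p2 = 1, so p is the relative price of good 1 (this is the
# homogeneity-of-degree-zero normalisation). Choose an excess demand
# for good 1 with three zeros; Walras' law then pins down the excess
# demand for good 2. By SMD, SOME economy of well-behaved consumers
# generates exactly this function -- so three equilibrium prices coexist.

def z1(p):
    """Excess demand for good 1 (illustrative; zeros at p = 0.5, 1, 2)."""
    return (p - 0.5) * (p - 1.0) * (p - 2.0) / p ** 3

def z2(p):
    """Excess demand for good 2, defined so that Walras' law p*z1 + z2 = 0."""
    return -p * z1(p)

def bisect(f, lo, hi, tol=1e-12):
    """Root of f in [lo, hi] by bisection; f(lo), f(hi) have opposite signs."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Three equilibria, one per sign change of z1:
equilibria = [bisect(z1, 0.3, 0.7), bisect(z1, 0.8, 1.5), bisect(z1, 1.6, 3.0)]
print([round(p, 6) for p in equilibria])   # [0.5, 1.0, 2.0]

# Walras' law holds identically, as the theorem's hypotheses require:
assert all(abs(p * z1(p) + z2(p)) < 1e-12 for p in (0.4, 0.9, 1.7, 2.5))
```

Continuity, homogeneity (via the normalisation) and Walras' law are all satisfied, yet nothing singles out one of the three market-clearing prices, which is the sense in which the inherited individual-level properties fail to deliver uniqueness.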
In view of this major failure, the NRP goes a step further, to assume the existence of a 'representative agent' as the embodiment of economic collectivities and as a bridge between the macro-economy and the assumed individual micro-behaviour. A strident critique of this heuristic comes from Kirman (1992), who argues that it has been devised to save general equilibrium theory vis-à-vis its failure to provide micro-foundations to collective economic behaviour. He claims that "whatever the objective of the modeller, there is no plausible formal justification for the assumption that the aggregate of individuals, even maximisers, acts itself like an individual maximiser. Individual maximisation does not engender collective rationality, nor does the fact that the collectivity exhibits a certain rationality necessarily imply that individuals act rationally. There is simply no direct relation between individual and collective behaviour", and he concludes that "it is clear that the 'representative' agent deserves a decent burial, as an approach to economic analysis that is not only primitive, but fundamentally erroneous" (Kirman, 1992: 118). His arguments are that "well-behaved individuals need not produce a well-behaved representative agent; that the reaction of a representative agent to change need not reflect how the individuals of the economy would respond to change; that the preferences of a representative agent over choices may be diametrically opposed to those of society as a whole - it is clear that the representative agent should have no future. Indeed, contrary to what current macroeconomic practice would seem to suggest, requiring heterogeneity of agents within the competitive general equilibrium model may help to recover aggregate properties which may be useful for macroeconomic analysis." (Kirman, 1992: 134).
The concept of the representative agent goes beyond ontological reductionism: it is an eliminativist heuristic that renders redundant a whole ontological level, that of mereologically complex and heterogeneous economic systems, by collapsing it into the subvenient level of atomic economic agents.

The problem of convergence
Another problem with the NRP emerges when we consider how economic systems converge to their putative equilibria. Walras assumed that the plans of all economic agents become compatible through a centralised, instantaneous and costless iterative auction process (tâtonnement), which is concluded before any real marketplace interaction takes place - hence the term 'ex ante equilibrium'. Leijonhufvud (1967) proposed, not without a hint of irony, the concept of the Walrasian auctioneer, a fictitious demon who drives the tâtonnement process like a deus ex machina, matching supply and demand in all markets before marketplace transactions occur and announcing equilibrium prices. A number of ad hoc and unrealistic behavioural assumptions about individual preferences (namely the rational choice axioms) and about the convergence process itself (namely the existence of a centrally coordinated price mechanism) ensure the existence of equilibrium; in principle, the nomological possibility of the Walrasian demon is not excluded, even though the computability of general equilibrium is not always guaranteed (Richter & Wong, 1999) and the informational requirements to ensure convergence are discouragingly large (Saari, 1985).
However, in the absence of this centrally coordinated price mechanism, i.e. in the absence of an actual Walrasian auctioneer, there is no unique 'natural' convergence path to equilibrium (Ackerman, 2002). As Tesfatsion (2006: 176-177) observes, this centrally coordinated price mechanism aims to eliminate the possibility of strategic behaviour and to replace decentralised agent interactions by an orderly payment system, whereby both households and firms "take prices and dividend payments as given aspects of their decision problems outside of their control […] with no perceived dependence on the actions of other agents". In the absence of this fictitious mechanism the model becomes analytically intractable; "the modeller must now come to grips with challenging issues such as asymmetric information, strategic interaction, expectation formation on the basis of limited information, mutual learning, social norms, transaction costs, externalities, market power, predation, collusion, and the possibility of coordination failure (convergence to a Pareto-dominated equilibrium). The prevalence of market protocols, rationing rules, antitrust legislation, and other types of institutions in real-world macroeconomies is now better understood as a potentially critical scaffolding needed to ensure orderly economic process." (ibid.) The centrally coordinated, costless and atemporal tâtonnement process - or, equivalently, the Walrasian auctioneer - is a heuristic invented to eliminate the complex dynamics that would emerge from decentralised interactions of economic agents and threaten the existence of (fixed-point) equilibria, and it thus serves the meta-axiom of normative equilibration. Nevertheless, the need for a central coordination mechanism implies that an economic system cannot be completely mereologically decomposed into atomistic individuals, and hence is inconsistent with the meta-axiom of atomistic aggregativity. 22
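What the absence of a unique 'natural' convergence path means can be sketched in a few lines. In the toy model below (the excess demand function and the adjustment speed are invented for illustration), a discrete tâtonnement rule is applied to an economy with multiple equilibria: two price histories starting a hair's breadth apart settle on different equilibria, so even the auctioneer's own adjustment process is path-dependent.

```python
def excess_demand(r):
    """Hypothetical aggregate excess demand for good 1 as a function of
    the relative price r = p1/p2, constructed (for illustration only)
    to have three equilibria: r = 0.5, 1 and 2."""
    return -(r - 0.5) * (r - 1.0) * (r - 2.0)

def tatonnement(z, r0, k=0.05, steps=2000):
    """Discrete Walrasian price adjustment: the fictitious auctioneer
    raises the relative price while excess demand is positive and
    lowers it otherwise; no trade takes place until z(r) = 0."""
    r = r0
    for _ in range(steps):
        r = max(1e-6, r + k * z(r))
    return r

# Two histories starting arbitrarily close together end at different
# equilibria, because the middle equilibrium (r = 1) is unstable:
low = tatonnement(excess_demand, 0.99)
high = tatonnement(excess_demand, 1.01)
print(round(low, 3), round(high, 3))  # 0.5 2.0
```

The stable outcomes bracket an unstable equilibrium that the process can never reach from any nearby starting point, illustrating why existence proofs say nothing about the convergence process itself.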

The problem of time
Still more serious problems arise when considering the time dimension of economic phenomena, which is by construction absent from the Walrasian model. Hicks (1939), influenced by Lindahl (1939), injects temporal dynamics into the static Walrasian model by assuming a sequence of (short-run) 'temporary' equilibria, which, however, do not necessarily lead to a Pareto optimal (long-run) 'equilibrium over time' in the absence of consistent expectations of future prices. The mathematically elegant Arrow-Debreu (1954) intertemporal equilibrium model, which is considered foundational for modern general equilibrium theory, assumes the existence of complete markets in 'contingent commodities', i.e. forward markets for commodities in contingent states of the world - a heuristic clearly devoid of empirical content. As in the static Walrasian model, equilibration here is single-stage, instantaneous and ex ante (hence implicitly requiring an intertemporal Walrasian auctioneer), and the outcome is Pareto efficient. A reinterpretation of the Arrow-Debreu model, which removes the untenable assumption of complete forward markets and yields an identical resource allocation, is Radner's (1972) sequential equilibrium model. Here a single state-contingent commodity is traded as an 'Arrow security' (a theoretical financial instrument with a state-contingent payoff), while economic agents form expectations about the future spot prices of all other commodities in each possible state of the world. This allows sequential instead of instantaneous equilibration, but requires economic agents to possess perfect foresight, in the sense of complete ex ante knowledge of spot prices in each state of the world, thus imposing on them effectively the same informational and computational requirements as those of the (intertemporal) Walrasian auctioneer.
The rational expectations hypothesis (Muth, 1961; Lucas, 1972) extends this line of thinking to a world of uncertainty by assuming that individual agents' expectations cannot differ systematically from the 'true' expectation that follows from the economic model. Without relaxing the informational and computational requirements of perfect foresight, 23 the theory imposes the additional condition that all stochastic processes involved in the economic model have known (or inferable) probability distributions and, a fortiori, are ergodic. 24 This heuristic casts off the all-pervasive irreducible uncertainty of the real world and replaces it with measurable uncertainty (of the 'Knightian risk' type). It also affirms the centrality of the ergodic hypothesis in the NRP, which Samuelson himself designated as a "sine-qua-non condition for economics as a science" (quoted by Davidson, 2012). The rational expectations 'revolution' inspired a whole range of macroeconomic models on Walrasian grounds, from 'new Keynesian' to real business cycle and dynamic stochastic general equilibrium models.
Pseudo-dynamic neo-Walrasian models commit a strong diachronic reduction with much more serious ontological implications than the synchronic mereological decomposition committed by the Walrasian model: here the posterior states of the economy collapse into a prior state of initial endowments, preferences and technologies, so that dynamics are reduced to statics, fundamental uncertainty to calculable risk, and both are effectively eliminated. The time evolution of the neo-Walrasian economy is not only deterministic but predetermined. Perfect foresight entails nomological determinism - the precept that all phenomena are completely nomologically necessitated by, and hence fully explainable in terms of, fundamental laws and their set of initial conditions. It further entails a stronger forward-looking expression of this precept, which we may call Laplacian predeterminism: the assumption that the future state of a system can be fully predicted on the basis of fundamental laws and a full knowledge of the sets of initial conditions of its constituent entities. 25 Neo-Walrasian economies are thus by construction Laplacian, in that their future states are fully predictable from the initial conditions of their constituent entities, i.e. agents' endowments, preferences and technologies. Even when uncertainty, which is intrinsically related to the unfolding of real time, is introduced in these models, it takes the form of ergodic stochasticity, and thus becomes neutralised, and eventually is thrown out of the picture together with time.
23 The theory assumes that the economic agents' information sets contain full knowledge of the true structure of the economic model, including other agents' decision rules and the true values of deterministic variables, and the realised values and probability distributions of all exogenous random variables.
24 A stochastic process is ergodic if its time average converges to its ensemble average (i.e. its expectation value). In practical terms, this entails that some aspects of the probability distribution of the stochastic process (such as its first moment) can be inferred from any of its realisations over a 'sufficiently long' time interval.
25 This precept was most explicitly advocated by Laplace (1825). The fictitious intelligence that conducts the necessary calculations for such a prediction at a universal scale, Laplace's demon, bears considerable similarities to Maxwell's demon in thermodynamics and the Walrasian auctioneer in economics. Today it is understood that the computational requirements of Laplace's demon render the Laplacian conjecture nomologically impossible. The Walrasian demon is nomologically possible, but only under very restrictive and unrealistic behavioural assumptions about the 'atomic particles', i.e. the economic agents. However, the intertemporal Walrasian auctioneer has much higher computational requirements, and as a result neo-Walrasian general equilibrium models too often prove to be non-computable (Richter & Wong, 1999; Velupillai, 2009).
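The definition of ergodicity in footnote 24 can be illustrated numerically. In the sketch below (the processes are invented for illustration), a stationary autoregressive process is ergodic: the time average of a single long realisation recovers the ensemble mean. A process whose level is fixed by a single initial draw is not: each history converges to its own private level, so no amount of historical data from one realisation reveals the ensemble distribution.

```python
import random

random.seed(42)

def time_average(path):
    return sum(path) / len(path)

def ergodic_path(T=100_000):
    """Stationary AR(1): x_t = 0.5*x_{t-1} + e_t. Ergodic: the time
    average of one long realisation converges to the ensemble mean (0)."""
    x, path = 0.0, []
    for _ in range(T):
        x = 0.5 * x + random.gauss(0, 1)
        path.append(x)
    return path

def non_ergodic_path(T=100_000):
    """x_t = Z + e_t, with the level Z drawn once per realisation.
    The ensemble mean is 0, but each realisation's time average
    converges to its own Z: history cannot stand in for the ensemble."""
    z = random.gauss(0, 10)
    return [z + random.gauss(0, 1) for _ in range(T)]

print(round(time_average(ergodic_path()), 2))      # close to 0
print(round(time_average(non_ergodic_path()), 2))  # close to its private Z
```

In the non-ergodic case the 'sufficiently long time interval' of footnote 24 never arrives, which is precisely the situation that rational expectations assumes away.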

Economies as many-body systems
Physical and biological many-body systems are typical sources of metaphors for economics but also of analogies that elucidate the deeper generative mechanisms of economic systems, and with them, the nature of emergence.
An archetypal metaphor for the NRP is that of the celestial many-body systems of 'rational mechanics'. 26 These closed, 27 time-reversal invariant Hamiltonian dynamical systems consist of non-colliding bodies that interact linearly and at a distance via a conservative central force field (i.e. the gravitational field). 28 Their macro-behaviour is fully reducible to the physical laws that apply to their micro-constituents and, a fortiori, they possess no macro-properties other than aggregated intrinsic (e.g. mass, momentum) and superposed relational (e.g. attractive forces) properties of their micro-constituents. Hence, they are resultant entities with deterministic dynamics. 29 (Neo-)Walrasian economies are similarly closed, structurally stable and time-reversible, and consist of individual agents that interact via a price vector field, but they are construed as static systems with fixed-point equilibria that lack proper dynamics and meaningful conserved quantities. 30 The difficulty of finding conservation laws analogous to those of Hamiltonian mechanics reflects precisely the fact that real economic systems are dissipative (see below) rather than conservative.
27 A system is closed (or isolated in thermodynamic terms) when it does not exchange matter or energy with its environment (for a generic and more formal definition of open and closed systems see Bunge, 1979). It should be noted that system closure is defined very differently, in terms of 'event regularities', in the 'critical realist' literature (on this point see Fleetwood, 2017). 28 A system is conservative when it conserves quantities such as mechanical energy and angular momentum and preserves the volume of its phase space. It is time-reversal invariant when it is governed by laws that would apply in the same way even if time ran in the opposite direction. An interaction is linear when it complies with the superposition principle, which entails additivity and first-degree homogeneity, and can thus have a vector representation.
29 An n-body system is generally non-integrable, i.e. it has no analytical solutions (except for 2-body and some special cases of 3-body systems). Non-integrable Hamiltonian systems exhibit deterministic chaos, which makes their macro-behaviour resemble stochastic behaviour, but they are ergodic in large parts of their phase space (Frigg, 2012).
30 As Mirowski (1989) has masterfully demonstrated, there is no meaningful 'conserved quantity' in neoclassical economics similar to energy in Hamiltonian mechanics, as 'utility' and 'value' fail to fulfil this role.
A more realistic physical analogue of macro-economies is that of high-dimensional many-body systems of free-moving and randomly colliding particles. While the phenomenological (thermodynamic) macro-description of such systems involves only a limited set of state variables in a tractable equation of state, their complete (classical or quantum mechanical) micro-description is unworkable even in the simplest models, given the dimensionality of their phase spaces. The Gibbsian device of the statistical ensemble links the macro- and micro-states of these systems in probabilistic terms. 31 In this scheme, many different micro-states may correspond to the same macro-state, but two different macro-states must correspond to different micro-states. And while the micro-states are subject to conservative, time-reversal invariant mechanical laws, the macro-state evolves irreversibly towards a monotonic increase in the system's entropy, consistent with the second law of thermodynamics, and eventually towards statistical equilibrium - the most likely, entropy-maximising macro-state, in which all accessible micro-states become equiprobable, and which essentially functions as a stochastic attractor of the system. 32 These many-body systems are by assumption closed, ergodic, and in or near equilibrium. They typically exhibit a phenomenological stratification, whereby an analytically irreducible macro level, which is nomologically autonomous but lacks effective downward causal powers, weakly supervenes on the micro level with multiple realisability. They have predictable macro-dynamics and do not develop novel properties. Therefore, they exhibit phenomenological but not ontological emergence.
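The correspondence between micro- and macro-states can be illustrated with the simplest possible ensemble - N identical particles distributed over two cells (a toy example, not drawn from the text): many micro-states realise each macro-state, and when micro-states are equiprobable, the entropy-maximising macro-state overwhelmingly dominates.

```python
from math import comb, log

N = 100  # identical 'particles' (or agents) distributed over two cells

# A micro-state is one assignment of particles to cells; a macro-state
# records only how many sit in the first cell. With equiprobable
# micro-states, a macro-state's probability is proportional to its
# multiplicity C(N, k), and its Boltzmann entropy is the log of that.
multiplicity = {k: comb(N, k) for k in range(N + 1)}
entropy = {k: log(m) for k, m in multiplicity.items()}

most_likely = max(multiplicity, key=multiplicity.get)
print(most_likely)  # 50: the entropy-maximising macro-state (half per cell)
print(round(entropy[most_likely] - entropy[25], 2))  # entropy gap to a lopsided macro-state
```

Even at N = 100 the equilibrium macro-state is hundreds of thousands of times more probable than a moderately 'unequal' one; at thermodynamic scales the dominance is absolute, which is why the macro-state functions as a stochastic attractor.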
Foley (2003) proposes an analogous statistical equilibrium model of market exchange, in which 'feasible sets of market transactions' correspond to accessible micro-states, 'feasible market distributions' to macro-states, the market equilibrium to the macro-state with the highest multiplicity (which is the solution to a constrained entropy maximisation problem), and the market itself to the statistical ensemble. This model allows for different types of transacting agents, avoids the convergence problem of the Walrasian tâtonnement, approximates Pareto efficient allocations, permits arbitrage, and allows the market to be in equilibrium even when individual traders are not. Aoki (1996) and Aoki & Yoshikawa (2007) develop a macroeconomic modelling methodology, also based on equilibrium statistical mechanics, that breaks away from the NRP more radically. In very general lines, their models assume large numbers of interacting heterogeneous agents with discrete-choice sets that define the micro-states of the economy, whose transitions follow continuous-time Markov processes. The macro-state of the economy is compatible with a multitude of micro-states, and its time evolution is given by a master equation as a probability flow of micro-state transitions and sojourns, with statistical equilibrium as the entropy-maximising macro-state. These models reproduce some aspects of (post) Keynesian theory, such as effective demand and financial instability à la Minsky.
31 A micro-state of a system is a point in the system's multidimensional phase space that represents its micro-configuration (i.e. the canonical coordinates of its particles) at a specific moment in time. A macro-state of a system is defined by the system's macro-properties (i.e. its state variables). A statistical ensemble, introduced by Gibbs (1902), is a collection of virtual copies of a system corresponding to its every possible macro-state, which defines a probability distribution over the system's micro-states, and hence over its phase space.
32 An attractor is an invariant set of the state space of a dynamical system, towards which neighbouring states of the system converge in the course of time. It can be a fixed point (point attractor), a limit cycle that corresponds to stable oscillations (periodic attractor), a hypertorus that corresponds to compound oscillations (quasi-periodic attractor) or a set of fractal dimension corresponding to deterministic chaos (strange attractor). A system may have multiple attractors.
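In the spirit of these statistical equilibrium models - though far cruder than either Foley's or Aoki's formulations - the sketch below simulates random pairwise exchange with a conserved aggregate ('money'). The economy converges to the entropy-maximising exponential distribution of holdings even though no agent optimises anything, and the market is in statistical equilibrium while individual holdings keep churning.

```python
import random

random.seed(0)

def exchange_economy(n_agents=1000, steps=200_000, m0=10.0):
    """Random pairwise exchange with a conserved total: at each step a
    random pair re-splits their combined holdings at a uniformly random
    point. The macro-state drifts to the entropy-maximising
    (exponential) distribution without any individual optimisation."""
    wealth = [m0] * n_agents
    for _ in range(steps):
        i, j = random.randrange(n_agents), random.randrange(n_agents)
        if i == j:
            continue
        total = wealth[i] + wealth[j]
        share = random.random()
        wealth[i], wealth[j] = share * total, (1 - share) * total
    return wealth

w = exchange_economy()
mean = sum(w) / len(w)
below_mean = sum(1 for x in w if x < mean) / len(w)
print(round(mean, 1))        # 10.0: the aggregate is conserved
print(round(below_mean, 2))  # ~0.63, the exponential signature (1 - 1/e)
```

The equilibrium here is a property of the macro-state (the distribution), not of any micro-state: individual holdings never settle down, which is exactly Foley's point that the market can be in equilibrium while individual traders are not.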
A closer analogue to real economies is the class of open, 33 non-linear, non-ergodic, time-irreversible many-body dynamical systems known as dissipative structures (Nicolis & Prigogine, 1977). 34 These exhibit anisotropic macro-properties, which effectively partition them into subsystems, and a mesoscopic level of system dynamics (Balian, 2007). Under certain conditions, a far-from-equilibrium dissipative system may reach self-organised criticality 35 - a point at which it tends to increase its internal structure by receiving energy from and dissipating entropy to its environment. The subclass of self-organised dissipative structures known as complex adaptive systems (CAS) exhibits properties that fulfil the macro- and microscopic conditions of ontological emergence (§2.3), namely:
§ self-organisation, i.e. the spontaneous increase in complexity of the system's internal structure without this increase being controlled by an external system; this is clearly a non-aggregative and non-structural, downward causal property of the system;
§ universality, i.e. the structural stability of a system under perturbations of its micro-specifications, whereby 'certain facts about the micro-constituents of the system are individually largely irrelevant for the system's behaviour at criticality, [and] rather, their collective properties dominate their critical behaviour' (Batterman, 2000: 127); this macro-property is a strong form of multiple realisability;
§ adaptation, i.e. the ability of the system to detect environmental variation and to adjust without losing its cohesion and its systemic properties, and resilience, i.e. its ability to remain within its current attractor basin after an exogenous disturbance;
§ undecidable dynamics that are a source of endogenous novelty - "irregular innovation-based structure-changing dynamics associated with evolutionary biology and capitalist growth" (Markose, 2005). 36
The study of economic entities as CAS is the subject-matter of complexity economics - an embryonic research programme that breaks radically away from the NRP by directly challenging its three hard-core meta-axioms. Moreover, many meso- and macroeconomic formations, such as enterprises (Fuller & Moran, 2001), supply networks and global value chains (Choi et al., 2001; Surana et al., 2005), industrial clusters (Fioretti, 2006), and the market or the capitalist economy at large (Markose, 2005; Witt, 2017), have been described or modelled as CAS in confluent streams of literature.
33 Living organisms, and biotic systems at large, are always open and never reach (chemical or thermodynamic) equilibrium as long as they are alive - they maintain a steady state, which is distinct from equilibrium (von Bertalanffy, 1968: 39).
34 Dissipative systems are non-conservative open dynamical systems, which exchange energy and matter with their environment. During their time evolution, their state-space volume contracts into an attractor of lower dimensionality, contrary to conservative systems, which preserve their state-space volume. The quantitative study of dissipative systems is the subject-matter of non-equilibrium statistical mechanics.
35 Phase transitions occur when small changes in the parameters of a system cause a qualitative change in its aggregate properties (Durlauf, 2001). The point at which a dynamical system undergoes a phase transition is a criticality. An ordinary criticality is obtained by exogenously varying the control parameters of the system, while a self-organised criticality results from its endogenous dynamics, in which case the criticality is an attractor of the system.
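Self-organised criticality can be illustrated with the canonical Bak-Tang-Wiesenfeld sandpile - a standard physics toy model, not an economic one. Driven by identical micro-perturbations (single grains), the system organises itself into a critical state in which avalanches of every size occur, a macro-property attributable to no individual grain.

```python
import random

random.seed(1)

def sandpile(size=20, grains=20_000):
    """Bak-Tang-Wiesenfeld sandpile: grains are dropped one at a time;
    any cell holding 4+ grains topples, shedding one grain to each
    neighbour (grains fall off the edge). The pile drives itself to a
    critical state without any external tuning of parameters."""
    grid = [[0] * size for _ in range(size)]
    avalanche_sizes = []
    for _ in range(grains):
        r, c = random.randrange(size), random.randrange(size)
        grid[r][c] += 1
        topples = 0
        unstable = [(r, c)] if grid[r][c] >= 4 else []
        while unstable:
            i, j = unstable.pop()
            if grid[i][j] < 4:
                continue
            grid[i][j] -= 4
            topples += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < size and 0 <= nj < size:
                    grid[ni][nj] += 1
                    if grid[ni][nj] >= 4:
                        unstable.append((ni, nj))
        avalanche_sizes.append(topples)
    return avalanche_sizes

sizes = sandpile()
print(max(sizes))         # the largest avalanche spans many topplings
print(len(set(sizes)))    # avalanches occur at a wide range of scales
```

The criticality here is an attractor of the endogenous dynamics in exactly the sense of footnote 35: no control parameter is varied from outside, yet the system sits at the point where a single grain can trigger a system-wide cascade.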

Emergent evolutionary dynamics in economic systems
A crucial difference between physical and biotic (i.e. biological, ecological and socio-economic) many-body systems lies in the complexity of their micro-constituents: those of the former are typically simple entities with nomologically determined interaction patterns; those of the latter are themselves CAS, with an infinite variety of emergent behavioural patterns instilled and amplified by the evolutionary processes of selection, variation and retention. 37 These universal processes are central premises of evolutionary models and theories even beyond the domain of biology, as in the case of the 'Darwinian' stream of the evolutionary research programme (ERP) in economics. The ERP reinstates the idea of the relative autonomy of macro- from microeconomic phenomena, "is biased toward dynamic rather than equilibrium-oriented modes of theorising" (Hodgson, 1998: 175) 38 and treats the heterogeneity of economic agents as an essential feature of evolving systems - variety, after all, is a prerequisite for evolutionary dynamics (Metcalfe, 2005). Under the ERP, economic agents interact with each other and their environment in historical time and exhibit satisficing rather than optimising behaviour, 39 while their choices are determined by individual habits, collective norms and institutions, fundamental uncertainty, incomplete information, and their cognitive limitations. 40 The universal evolutionary processes instigate intra- and interspecific interactions among different groups of micro-entities that shape dynamic fitness landscapes and determine the macro-dynamics of biotic systems. In bio-ecological systems these interactions are intergenerational, exogenously constrained by the natural environment, and apply to entire populations, leading to their co-evolution. By contrast, in socio-economic systems they are typically intra-generational, endogenously constrained by the institutional environment, and apply to sub-populations, leading to their co-adaptation (Chorafakis & Laget, 2009). In both cases these interactions can be competitive (or antagonistic), when different (sub-)population types (i.e. biological species or social groups) compete for the same resource; mutualistic, when they are reciprocally beneficial and hence foster symbiosis; 41 or exploitative, when one type is itself the resource of another, ranging from predation, whereby individuals of the exploited type are annihilated, to parasitism (Maynard Smith, 1989).
The emergent macro-properties of socio-economic systems are hence due not only to competition - the only behavioural motive recognised by the NRP (at least before the advent of behavioural economics) - but also to various forms of mutualistic or exploitative symbiosis that establish the social division of labour.
36 Wolfram (1984) identifies four 'universal classes' of system dynamics according to the qualitative nature of their attractors: those that correspond to finite-state automata with point attractors; push-down automata with periodic or quasi-periodic attractors; linear-bounded automata with strange attractors and chaotic dynamics; and Turing machines with undecidable dynamics, which are algorithmically incomputable.
37 In a biological context selection is the process that determines the survival and reproductive success of individuals by favouring some traits over others at the population level. Variation infuses new traits into populations: recombination or crossover, the reciprocal exchange of genetic material between chromosomes in the process of meiosis, is the most common source of variation; mutation is a rarer and random source of variation. Retention (or inheritance) is the process of transmission of genetic material from individuals to their offspring, whereby variations become cumulative and cause macro-evolution and speciation.
38 Veblen (1898), a pioneer of the ERP, scorned his contemporary mainstream economists "for being still content to occupy themselves with repairing a structure and doctrines and maxims resting on natural rights, utilitarianism, an administrative expediency"; perceived the economy as "an open and evolving system,… embedded in a broader set of social, cultural, political, and power relationships" (Hodgson, 2000); and regarded "the notion of individual agents as utility-maximising… as inadequate or erroneous" (ibid.).
39 The principle of satisficing assumes that economic agents conduct search until an aspiration level of utility, i.e. a pay-off threshold, is reached, beyond which the search process is over (Simon, 1956). It is a suboptimal alternative to global utility maximisation, but its suboptimality is mitigated when the real costs of obtaining and processing complete information are taken into consideration.
40 Alchian (1950: 211) conceives of the economic system as "an adoptive mechanism which chooses among exploratory actions generated by the adaptive pursuit of 'success' or 'profits'", and considers that population-wide selection dynamics rather than optimising individual choices determine the survival of economic agents and the shape of economic organisations and systems (Moe, 1984). Nelson & Winter (1982) develop several seminal models of technological change on the assumptions that firm behaviour follows a limited number of path-dependent, adaptive decision rules or 'routines', and that economic growth is an open-ended disequilibrium process driven by technological change, which comes about through the evolutionary selection of 'fit' routines.
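The three universal evolutionary processes can be sketched in a few lines of code. The toy below is illustrative only: bit-string 'routines' stand in for firm behaviours, and an arbitrary fitness criterion (the number of 1-bits) stands in for market selection.

```python
import random

random.seed(3)

def evolve(pop_size=60, genome_len=20, generations=120, mut_rate=0.02):
    """Minimal illustration of selection (fitter routines reproduce),
    variation (crossover and rare mutation) and retention (offspring
    inherit their parents' bits) acting on a population of bit-string
    'routines'. Fitness is simply the number of 1-bits."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    fitness = lambda g: sum(g)
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            # selection: tournament of two, the fitter parent wins
            p1 = max(random.sample(pop, 2), key=fitness)
            p2 = max(random.sample(pop, 2), key=fitness)
            # variation by recombination: one-point crossover
            cut = random.randrange(1, genome_len)
            child = p1[:cut] + p2[cut:]          # retention: inherited bits
            # variation by mutation: rare random bit flips
            child = [b ^ 1 if random.random() < mut_rate else b
                     for b in child]
            new_pop.append(child)
        pop = new_pop
    return max(fitness(g) for g in pop)

print(evolve())  # near-maximal fitness, with no individual optimising anything
```

The adaptation is a population-level (macro) outcome of micro-level selection, variation and retention: no agent in the model computes or maximises anything, which is precisely the Alchian-Nelson-Winter point.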

Ontological levels and emergent economic entities
In agreement with the layered model of the world, real economies consist of a nested hierarchy of layers populated by economic entities of increasing scale and complexity, from individuals, through organisations, social groups and networks, to economies at various geographical scales, from the local to the national and the global. These layers are conventionally defined in terms of the size or number of entities that make them up. A distinction, however, should be drawn between layers that correspond to ad hoc aggregated and hence resultant entities, and those that correspond to universal types of emergent entities. The latter are ontological levels, whose hierarchy is determined not by the number or size but by the structural complexity of the entities that populate them. 42 The NRP typically considers a micro and a macro layer of economic activity, of which the latter is (presumed to be) fully reducible to the former, effectively leaving just one ontological level, the micro, which corresponds to the basal universal type of the homo oeconomicus. I posit that in real economies there are necessarily three ontological levels, the micro, the meso and the macro.
The micro level is populated by the mereologically simplest adaptive/reflective economic entities, the individual agents, who interact with their environment and with each other within a delimited relational plexus. They possess procedural and bounded rather than substantive rationality that drives their (not always consistent) attempts to 'satisfice' a set of goals subject to numerous informational and computational limitations and fundamental uncertainty. 43 The information-processing and decision-making apparatus they use for this purpose, referred to as schema in cognitive science (Piaget, 1971;Rumelhart, 1980;Gell-Mann, 1994), is a finite internal model of reality coupled with a set of generic, categorical rules, updated in the process of adaptation and learning.
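Simon's satisficing heuristic (footnote 39) is easy to state algorithmically. The sketch below uses invented numbers and also illustrates why satisficing need not be inferior: once search itself is costly, stopping at an aspiration level outperforms exhaustive 'global maximisation' net of search costs.

```python
import random

random.seed(7)

def satisfice(offers, aspiration, cost_per_draw):
    """Simon-style satisficing: inspect offers one by one, paying a
    search cost per inspection, and accept the first offer that clears
    the aspiration level; settle for the best seen if none clears it."""
    seen = []
    for n, offer in enumerate(offers, start=1):
        seen.append(offer)
        if offer >= aspiration:
            return offer - n * cost_per_draw
    return max(seen) - len(seen) * cost_per_draw

def maximise(offers, cost_per_draw):
    """'Global' maximisation: inspect everything, then pick the best."""
    return max(offers) - len(offers) * cost_per_draw

# 500 search episodes, each with 200 uniformly distributed offers:
trials = [[random.random() for _ in range(200)] for _ in range(500)]
sat = sum(satisfice(t, aspiration=0.9, cost_per_draw=0.002)
          for t in trials) / len(trials)
opt = sum(maximise(t, cost_per_draw=0.002) for t in trials) / len(trials)
print(round(sat, 2), round(opt, 2))  # satisficing wins once search is costly
```

The comparison makes concrete the mitigation noted in footnote 39: the 'suboptimality' of satisficing vanishes as soon as the real costs of obtaining and processing information are counted.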
The meso level is populated by mereologically complex entities, namely economic plexuses that consist of micro-entities connected in a stable or quasi-stable division of labour. These are emergent meta-agents 44 equipped with their own information-processing and decision-making apparatuses, or 'meta-schemata', which determine their operative modes (Chorafakis, 2013). There are two types of economic plexuses in terms of their internal architecture: (i) Integrated systems with a stable internal division of labour similar to that of multicellular organisms in biology. Emergent entities of this type are organisations, such as corporations, partnerships, associations, public bodies and government agencies. They have legal personalities, hierarchical structures that are robust to perturbations of their micro-specifications, and causal powers that allow them to act as unitary agents. Their meta-schemata are centralised and emerge from the fusion of the schemata of their subvenient micro-entities, which are fully and irreversibly specialised with fixed in-between links.
(ii) Distributed systems generated by an open and evolving external division of labour similar to that of colonial organisms in biology. Entities of this type are network organisations, 45 industrial clusters and local or global value chains that consist of quasi-integrated (Blois, 1972) business units. Their governance structure is a hybrid between open markets and corporate hierarchies based on contractual relationships (Williamson, 1985). Their decentralised and collective meta-schemata result from the symbiotic co-adaptation, rather than the fusion, of their subvenient micro-entities, which are quasi-autonomous with relatively stable but flexible links. These collective entities do not act as purposive unitary agents but are adaptive, exhibit emergent properties and possess some causal powers. Both types of economic plexuses behave as CAS. Their non-trivial topologies are shaped by path-dependent and assortative micro-interactions, whose dominant modes are mutualistic and exploitative co-adaptation. The meso level is the locus where the division of labour unfolds, productive relations are formed, and social capital is accumulated. By contrast, under the NRP the absence of direct interactions among individuals and their 'isotropic' coordination through a fictitious apparatus (the Walrasian auctioneer) corresponds to a trivial topology, which becomes obsolete with the heuristic of the representative agent.
The macro level is the locus of complex economic systems (e.g. national economies, intergovernmental organisations such as trade blocs or currency unions, or even the world economy as a whole) that emerge from the interconnection of various types of meso-economic plexuses in a multifaceted division of labour. These 'networks of networks' have their own macro-schemata, namely operational macro-regimes with concomitant governance structures and institutional apparatuses (e.g. legal and monetary systems), which engender substantial emergent phenomena (e.g. money and credit, technologies, public goods) with obvious downward causal powers. These macro-entities exhibit much greater internal heterogeneity than their subvenient meso-entities. Their complex topologies are largely shaped by the asymmetric positional powers of their constituents. As a result, the dominant modes of interaction of their constituents are competitive and exploitative co-adaptation.

The quest for micro-foundations versus emergentist methods
A phenomenological, as opposed to a deductive-nomological, theory is one that establishes nomic connections among observed objects without resorting to 'first principles', i.e. 'fundamental' laws. In a layered world, it is a theory based on the observation of empirical macro-regularities without explicit assumptions about subvenient micro-structures. Many robust scientific theories are phenomenological (e.g. classical thermodynamics in physics). Nonetheless, a deeply entrenched belief of the NRP that stems from its positivist epistemological roots is that any economic theory lacking 'micro-foundations' is epistemically inadequate. This belief has driven the attempts to cast Keynes's phenomenological theory on Walrasian micro-foundations 46 through the neoclassical synthesis 47 -a 'bastard Keynesianism', according to Robinson (1973), that extends the protective belt of the NRP by selectively incorporating de-contextualised Keynesian ideas and ignoring the irreconcilable differences between the hard cores of the neoclassical and the Keynesian research programmes. 48 From a methodological perspective the neoclassical quest for micro-foundations is ill-conceived: it involves the epistemological micro-reduction of an empirically valid macro-theory on the basis of an axiomatic reducing theory that lacks empirical content; and whenever a discrepancy between the reduced and the reducing theories emerges, the latter always takes precedence.
The emergentist's approach is radically different: she recognises the legitimacy of the quest for an (empirically valid) micro-theory but also the equivalidity and autonomy of the micro- and macro-theories, and she endeavours to discover a 'trans-ordinal' connecting theory, which is neither aggregative nor deterministic, considering that emergence in complex systems is a two-stage process that passes through an irreducible meso level. This methodology is adequate as long as it enhances the predictive powers of the theories.
46 As Weintraub (1977: 5) observes, 'since Keynes's analysis seemed to require a disequilibrium theory, or a time-intrinsic general equilibrium structure, and since even static general equilibrium analysis was immensely difficult, there was no sound microeconomic system that, when aggregated, yielded Keynesian insights. There was only the negative but useful result that neoclassical value theory was inconsistent with Keynes's General Theory'.
47 The term is attributed to Samuelson (1948), but the ground for this construct is prepared in the early work of Hicks (1936; 1937), and its climax is reached in Modigliani (1944).
48 The Keynesian research programme makes no explicit assumptions about underlying behavioural laws: animal spirits, rather than rational calculations, and 'human logic' conditioned by 'mental habits' rather than formal logic (Winslow, 1986) implicitly determine the behaviour of individual economic agents, including entrepreneurs' investment decisions; it rests on implicitly organicist metaphysical premises (ibid.), which are incompatible with atomistic aggregativity; it assumes an economic environment of irreducible uncertainty, which is at odds with perfect foresight, the quantifiable uncertainty of rational expectations, and the ergodic hypothesis; and by assuming non-clearing labour markets it predicates a disequilibrium theory incompatible with ex ante equilibration.
I distinguish two approaches to scientific method that are consistent with the emergentist methodology and differ from both the deductive-nomological (e.g. of general equilibrium) and the inductive-statistical (e.g. of non-Bayesian econometrics) methods: the probabilistic (Farjoun & Machover, 1983), and the generative or constructive (Epstein, 2005; 2011). A paradigmatic application of the probabilistic method in the physical sciences is statistical mechanics, a trans-ordinal theory that successfully connects an empirically robust phenomenological macro-theory (thermodynamics) and the most accurate micro-theory in existence (quantum mechanics). This method has recently found manifold applications in economics, and in particular in the fast-growing subfield of econophysics. 49 The generative method derives in an iterative, bottom-up fashion the explanandum (the observed macro-dynamics of a complex system) from the explanans (the micro-interactions of its parts, i.e. individual agents with specific behavioural characteristics) by simulation. This is the opposite of the top-down approach of both the deductive and the inductive methods, which subsume the explanandum under 'general' or 'statistical' laws respectively, and consistent with retroduction -a method of explanation which recursively shifts from observed empirical regularities to their deeper generative causes. 50 It is also consistent with emergence, 51 as it represents more accurately the heterogeneity, the bounded cognitive and informational capacities of economic agents, and the complex dynamics of their anisotropic plexus of interactions. This new approach to 'micro-foundations' is realistic, experimental and open, as opposed to the a priori, axiomatic and closed approach of the NRP. The main instrument of the generative method is agent-based modelling (ABM).
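To give a concrete flavour of the probabilistic method, the following toy simulation, a sketch of my own in the spirit of kinetic exchange models in econophysics and not drawn from the works cited above, shows how a macro-regularity can emerge as an entropy-maximising stochastic equilibrium from purely random, non-optimising micro-behaviour. Agents repeatedly pool and randomly split their money in pairs; the stationary macro-distribution is approximately exponential (Boltzmann-Gibbs), the maximum-entropy distribution for a positive conserved quantity. All parameter values are illustrative.

```python
import random

def simulate_exchange(n_agents=1000, n_steps=200_000, endowment=100.0, seed=42):
    """Toy kinetic exchange model: all agents start with the same money;
    at each step a random pair pools its money and splits the pot at a
    uniformly random point. Money is conserved at every micro-step."""
    rng = random.Random(seed)
    money = [endowment] * n_agents
    for _ in range(n_steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i == j:
            continue
        pot = money[i] + money[j]
        share = rng.random()
        money[i], money[j] = share * pot, (1.0 - share) * pot
    return money

money = simulate_exchange()
mean = sum(money) / len(money)
# Macro-regularity: the distribution is approximately exponential, the
# maximum-entropy distribution for a positive conserved quantity; for an
# exponential distribution, P(X < mean) = 1 - 1/e (about 0.63), even
# though no individual agent optimises anything.
frac_below_mean = sum(m < mean for m in money) / len(money)
print(round(mean, 2), round(frac_below_mean, 2))
```

The point of the sketch is epistemological rather than empirical: the macro-law (the shape of the distribution) is robust to the details of the micro-specification, exactly as a trans-ordinal theory requires.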
Epstein (2011: 42-43) observes that ABM aims to provide 'computational demonstrations that a given microspecification is in fact sufficient to generate a macrostructure of interest', adding that the motto of generative social science is 'if you didn't grow it, you didn't explain its emergence'.
49 Inter alia, Schulz (2003); Aoki & Yoshikawa (2007); Abergel et al. (2017).
50 Retroduction is the principal method of scientific explanation in transcendental realism. Bhaskar (1975) contends that empirical event regularities ('constant conjunctions of events') are neither sufficient nor necessary conditions for a scientific law, and that the latter is independent from the patterns of events it generates. Transcendental realist 'laws' are conditionals 'designating the activity of generative mechanisms and structures independently of any particular sequence or pattern of events' (Bhaskar, 1975: 14).
51 Confusingly, Epstein (2011) claims that the main instrument of the generative method, agent-based modelling, is incompatible with classical emergentism. The emergence advocated by classical emergentists is, however, of the ontological type, while the generative method is a method of scientific explanation without a priori metaphysical commitments.
The epistemological adequacy of ABM methodology has been challenged on the grounds of the under-determination problem (Windrum et al., 2007), which arises when the same macro-structure can be 'grown' from several different micro-specifications, and hence a given micro-specification may be sufficient but not necessary to generate the macro-structure of interest. 52
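The generative method can be made concrete with the best-known ABM in social science, a Schelling-type segregation model. The sketch below is my own illustration, not code from any of the works cited: a mild individual preference for similar neighbours 'grows' a strongly segregated macro-structure that no agent intends, an emergent pattern in exactly Epstein's sense. Grid size, vacancy rate and tolerance threshold are illustrative choices.

```python
import random

def schelling(size=20, vacancy=0.1, tolerance=0.3, max_sweeps=50, seed=1):
    """Minimal Schelling-type ABM on a torus: two agent types; an agent is
    unhappy when fewer than `tolerance` of its occupied neighbours share
    its type, and all unhappy agents relocate to random vacant cells."""
    rng = random.Random(seed)
    n_vacant = int(size * size * vacancy)
    n_agents = size * size - n_vacant
    cells = [0] * n_vacant + [1] * (n_agents // 2) + [2] * (n_agents - n_agents // 2)
    rng.shuffle(cells)
    grid = [cells[r * size:(r + 1) * size] for r in range(size)]

    def similarity(r, c):
        kind, same, occupied = grid[r][c], 0, 0
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr or dc:
                    nr, nc = (r + dr) % size, (c + dc) % size
                    if grid[nr][nc]:
                        occupied += 1
                        same += grid[nr][nc] == kind
        return same / occupied if occupied else 1.0

    for _ in range(max_sweeps):
        unhappy = [(r, c) for r in range(size) for c in range(size)
                   if grid[r][c] and similarity(r, c) < tolerance]
        if not unhappy:
            break  # micro-equilibrium: no agent wants to move
        vacant = [(r, c) for r in range(size) for c in range(size)
                  if not grid[r][c]]
        for r, c in unhappy:
            k = rng.randrange(len(vacant))
            vr, vc = vacant[k]
            grid[vr][vc], grid[r][c] = grid[r][c], 0
            vacant[k] = (r, c)

    # Macro observable: mean neighbourhood similarity over occupied cells;
    # a random configuration scores about 0.5.
    occupied = [(r, c) for r in range(size) for c in range(size) if grid[r][c]]
    return sum(similarity(r, c) for r, c in occupied) / len(occupied)

print(round(schelling(tolerance=0.3), 2))
```

Note that the sketch also illustrates the under-determination problem: several different micro-specifications (other relocation rules, neighbourhood definitions or update orders) can grow qualitatively similar segregated macro-patterns, so growing the pattern shows sufficiency, not necessity, of the chosen micro-specification.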

Towards a systemic research programme in economics
From a Lakatosian perspective, epistemic anomalies lead to the extension of the protective belt of a research programme until it ceases to produce new knowledge and becomes degenerative (Lakatos, 1978). We saw that the hard-core assumptions of the NRP exhibit internal inconsistencies, lead to nomological impossibilities, and support a theory that ignores or fails to explain and predict many real economic phenomena. These are strong indications of degeneracy. I argued that one of the main causes of the degeneracy of the NRP is its adherence to strong ontological reduction, and in particular to the belief that economic systems can be fully explained on the basis of a limited set of axiomatically postulated and highly 'stylised' behavioural patterns of atomistic agents. As a corollary, an economic theory or model under this research programme is valid only insofar as it can be derived from the laws that govern the ostensibly more 'fundamental' ontological level of atomistic economic agents, the utilitarian homines oeconomici.
In this paper we saw in passing that some 'heterodox' research programmes in economics, which are incompatible with the hard-core axioms of the NRP, are in principle consistent with emergence: The Keynesian research programme has an empirically sound corpus of macro-theory that is irreducible to neoclassical 'micro-foundations'. The evolutionary research programme focuses on population-wide selection dynamics that emerge from interactions of heterogeneous, adaptive (rather than representative, hyper-rational) agents, instead of on optimising individual choices. Complexity economics and some strands of econophysics either adopt a non-deterministic view of economic phenomena by modelling micro-behaviour as a stochastic process and assuming entropy-maximising stochastic equilibria, or focus on disequilibrium dynamics, which lend to economic systems some of their most interesting properties, such as their endogenous capacity to generate innovation and structural change.
A systemic research programme (SRP) founded on the premise of emergence would approach economic agents and complex economic systems from a totally different perspective from that of the NRP: It will adopt a layered model of the world, in which higher layers mereologically depend and weakly supervene on lower ones while maintaining their nomological autonomy. In that sense, it will recognise that higher-level entities may exhibit properties that are inexplicable and unpredictable by lower-level laws. Consequently, the derivation of higher-level laws will not require the recourse to lower-level ontologies ('micro-foundations'). This will not preclude the formulation of non-deterministic 'trans-ordinal' theories that connect adjacent layers without eliminating their stratification. Two fertile methods consistent with this approach, and already used in econophysics, complexity and evolutionary economics, are the probabilistic and the generative. Under the SRP, real economies will be modelled as open, dissipative dynamical systems, with a capacity to self-organise and to adapt, to generate endogenous novelty (e.g. in the form of technological change and economic growth), and to be resilient to changes in their micro-specifications but also susceptible to catastrophic phase transitions (e.g. financial crises). These systems will be seen as the outcome of multi-level emergence that articulates the micro level of individual economic agents, the meso level of economic plexuses and the macro level of economic systems. As in all biotic systems, interactions among different types of entities that populate each level will be governed by universal evolutionary processes (notably selection, variation and retention), give rise to complex co-evolutionary dynamics, but also be subject to institutional constraints. Finally, the SRP will not be an empirically void modelling exercise: It will be able to accommodate empirically robust 'phenomenological' macro-theories and to enhance their analytical and predictive powers.