Tokenization and Blockchain Token Classification: A Morphological Framework

This work starts from the acknowledgment that, even though blockchain technology has been around for more than ten years, knowledge about its economic and business implications remains fragmented and heterogeneous. First, the shift from economics to tokenomics and the central role of the token within blockchain-based ecosystems are analyzed. Subsequently, a generalized definition of the token is proposed. To establish the requirements for a comprehensive description of tokens that accounts for their wide variety, a comparative assessment of the token classification frameworks available in the literature is performed. This analysis is leveraged to propose a new, comprehensive token classification framework based on a morphological analysis representation. The proposed framework will be further refined through an empirical and iterative approach in future work.


I. INTRODUCTION
Over a decade has passed since Satoshi Nakamoto published an innovative solution to the double-spending problem [1] based on a peer-to-peer disintermediated network [2]. The whitepaper gave birth to the first blockchain and the first cryptocurrency, Bitcoin. The following years saw the rise and fall of the Bitcoin price. At the same time, the underlying Distributed Ledger Technology (DLT) has been acknowledged as a foundational technology [3], with a potential impact likely to match or outstrip the revolution brought by the Internet in the Nineties [4], [5].
Even though blockchain technology has been around for a decade, a shared understanding of some fundamental mechanisms is still lacking [6]. In particular, while the technical aspects have been thoroughly explored, defined, and branched out into different deployments, the business and social implications remain blurred.
Setting aside the technical details (covered by an extensive literature), one of the founding pillars of blockchain technology is the possibility of reaching a shared consensus on a univocal truth describing the history of states of a digital ecosystem, in the form of a ledger of transactions. A mathematical algorithm mediates the achievement of such a consensus on a unique source of authenticity. The ultimate implication of relying on a consensus algorithm is overcoming the need for a central authority entitled to provide a univocal truth. Therefore, the first paradigmatic shift originates from decentralizing the source of truth and cutting off centralized guarantors.
Talking about "univocal truth" and "centralized guarantors" may sound far too theoretical and removed from daily life, but quite the contrary is true. The most fundamental aspects of people's lives rely on a centralized source of truth that acts as a guarantor, starting from their identity. The supreme act of one's self-determination needs to be certified by a third party (such as a state administration) to be generally acknowledged. Countless are the scenarios in which a birth certificate, an ID, or a passport is needed to carry out daily activities, and a central government issues all those documents. The same applies to the management of finances: when someone makes a payment, their bank (or credit card issuer) guarantees that the funds in their account cover the expense (or that it suits their credit score).
Tracing back to the roots of money, the vast majority of economic transactions are performed using a currency, in particular a fiat currency (fiat in Latin means "let it be done"). Fiat currencies do not have an intrinsic value; they can be used because a central authority (e.g., a Central Bank) establishes and maintains their value [7]. The ground-breaking innovation introduced by Satoshi, from a non-technical perspective, is the decentralization of the source of truth in handling currency-based transactions, de facto offering an opportunity to ensure reliable peer-to-peer payments without the need for a system of intermediaries acting as guarantors of the validity of transactions. Decentralization applies throughout the monetary system: Bitcoin is a decentralized and disintermediated currency that is not issued and managed by central authorities but is instead algorithmically governed.
The path outlined by Satoshi, bypassing centralized sources of truth, paved the way for the overall shift of numerous economic activities. In the years following the launch of Bitcoin, a large number of cryptocurrencies emerged, and a diverse cosmos of blockchains has been developed. Furthermore, decentralization has been applied well beyond the merely monetary context, embracing different value-based scenarios through tokenization.
In such a heterogeneous context, there is a need to shed light on the actual role that blockchain tokens play in decentralized ecosystems. To address this need, the present paper starts by describing what a tokenization process is and which innovations it brings. It then moves to the fundamental unit of that process, the token, to distill its nature and to explain the rationale underlying the demand for a comprehensive classification framework. After analyzing the state of the art of token classification approaches described in the literature, the core contribution of this work is presented: a morphological token classification framework with the ambition to provide a tool for the extensive and complete description of a token, filling the gaps left by the alternative frameworks available today.

II. THE SHIFT FROM ECONOMICS TO TOKENOMICS
The tokenization process can be described as the encapsulation of value in tradeable units of account, called tokens or coins. The disruptive potential lies in expanding the concept of value that can be partitioned and traded beyond purely economic terms to include, for example, reputation, work, copyright, utility, and voting rights. Once tokenized, all these manifestations of value can be detected, accounted for, and leveraged within a system of incentives and fair redistribution of wealth and power.
In other words, tokenization represents a form of digitalization of value and, just as the Internet enabled the free and fast circulation of digitized information, so the blockchain is allowing the "almost free" [8] and borderless flow of digitized value. The blockchain made it possible to algorithmically solve the double-spending problem and introduced the concept of digital scarcity, as opposed to the digital abundance characterizing the Internet [9]. In particular, digital scarcity will act as a key enabler of a new digital economy relying on assets that are liquid, divisible, borderless (easily transportable and quickly transferable), and, unlike currencies, have the potential to appreciate over time. The deflationary nature of some of these assets may have deep implications in helping our society migrate from a debt-based economy, producing significant improvements in people's lives and democratic processes.
Once tokenized, every kind of value (in a broad sense) can be managed as a digital asset, whose unit of account is a dedicated virtual token. Such virtual tokens can be minted by any individual or organization that defines the set of rules governing them, such as the token features, the monetary policy, and the users' incentive system. In light of this, the tokenization process can be further described as the creation of a self-governed (tok)economic system, whose rules are programmed by the token issuer.
Here comes the second paradigmatic shift, namely from economics to tokenomics. In economics, innovation proceeds and propagates by introducing a change in the context of set rules and by observing how such a relatively rigid framework reacts to the change. Therefore, the outcome of the introduced innovation is assessed, at first, on a predictive basis. Conversely, in tokenomics, innovation is put forward by designing the rules governing the playground so that the stakeholders' behavior aligns with the goal pursued. In other words, the second paradigmatic shift moves from the passive observation of the ecosystem's reaction to a change to the active design of the ecosystem's constituent laws, aimed at reaching the desired outcome.
Both the tokenization process and the shift to tokenomics revolve around the token. Without neglecting the importance of the underlying infrastructure and its technical features, a deep understanding of the nature of the token is fundamental to effectively unleash the disruptive potential of blockchain technology.

III. THE UNDERLYING RATIONALE TO TOKENS CLASSIFICATION
A deep understanding of the nature of the token, and the capability to describe it comprehensively, is as crucial as it is challenging. Given the cornerstone role played by tokens in a tokenization process, their meaning is intimately related to the very concept of tokenization. Following this approach, the token definition moves along two strands: on one side, it addresses the function performed by tokens; on the other, it drills down to the very essence of what they represent.
Looking at the function performed by tokens in economic terms, a token can be described as "a unit of value that an organization creates to self-govern its business model, and empower its users to interact with its products while facilitating the distribution and sharing of rewards and benefits to all its stakeholders" [10]. In simplistic terms, tokens can be seen as privately issued currencies used to exchange value within an ecosystem (e.g., Bitcoin), but in reality their usage has gone far beyond mere currency applications. The roles that a token may play are manifold and, in general terms, a token can be understood as a socio-economic dummy tool that promotes the coordination of the actors of a regulated ecosystem towards the pursuit of a network objective function [11], through a set of incentive systems.
Since it is a dummy tool, the token does not have an intrinsic, self-standing definition; its nature is determined by what it represents. Sticking to the concept of tokenization as an encapsulation of value, the token is the representation of such value. Therefore, a deep understanding of the encapsulated value provides the key to distilling the essence of tokens.
The value represented by the tokens observable in the wild is diversified and cannot be described univocally: it can be the right to a discount on an exchange rate, the proof of ownership of a gold ingot, the reward for solving a mathematical problem to validate the next block of the chain, or many other things. The challenge is to generalize the definition of token-represented value by identifying a common trait that all these expressions of value share. Looking deeply into the source of token-encapsulated value, the concept of trust recurs over and over.
When tokens represent the right of the holder, for example, to access a service, benefit from a discount, or express a vote within a regulated ecosystem, the token holder grants trust to the token issuer. In particular, the holder trusts that the right represented by, and originating from, the token holds and is enforceable. Ultimately, the holder trusts the token issuer and its capability to honor the obligation associated with the right represented by the token. When tokens represent an underlying asset (or, more precisely, a real right, ius in re, on an underlying asset), the token holder trusts that the token issuer ensures the enforceability of the right itself and, at the same time, that the underlying asset is adequately managed and holds (or increases) its value. When tokens are the reward for the block validation activity, their value is the direct representation of the level of trust towards the token holder community and its disintermediated and decentralized consensus, enabled by the underlying blockchain infrastructure. Continuing along this road, it is always possible to trace the source of value of a token back to the concept of trust. Ultimately, it is possible to define tokens as quantifiable representations of decentralized and disintermediated trust.
Many tokens have been conceived and deployed in a relatively short time frame: their flourishing has been chaotic, highly experimental, and iterative; their adoption and growth followed an evolutionary pattern, with many tokens that did not survive the process of natural selection. Given the early stage and the broad scope of token-driven innovation, the nature of tokens is still taking shape, and its definition should be approached with a dynamic and iterative mindset. The highly diverse features and applications of tokens pose another challenge to defining their nature. The extent of this diversity is such as to require a comprehensive taxonomy, similar to what applies to living organisms as a result of evolution. So far, a considerable number of classifications has already been proposed. Nevertheless, the transition from the initial unruled chaos to an ordered, yet dynamic, taxonomy is still far from mature and requires a collective effort of scholars, experts, and practitioners.
The goal of this work is to propose a comprehensive approach to token classification that accurately maps all the different branches of value representation that tokens can convey. In performing such a classification, it is fundamental to keep in mind that trust is at the core of, and permeates, all the expressions of value brought by every kind of token.

IV. STATE OF THE ART IN TOKENS CLASSIFICATION
As previously mentioned, reaching an overall understanding of the multiform nature of tokens requires a comprehensive taxonomy approach. If, in general terms, tokens can be described as a quantifiable representation of decentralized and disintermediated trust, an operational definition of existing (and future) tokens must include all the diverse shades of trust that they convey. Expanding the definition of tokens requires a systematic and ordered procedure, and resorting to the concept of taxonomy, borrowed from the natural sciences, has established itself as a common practice among scholars and experts dealing with this challenge.
A fair number of token classifications and taxonomies have been proposed so far. Oliveira et al. [12] present one of the most comprehensive studies on the topic, combining extensive literature research with insights collected through 16 interviews with experts and practitioners. The resulting classification characterizes tokens using 13 different parameters, describing both their technical features and business-related aspects. Furthermore, by mapping 18 tokens against those parameters, 8 token archetypes were identified through the analysis of recurring feature patterns. The work of Oliveira et al. is grounded in several previously proposed classifications, two of which are worth mentioning. "The Token Classification Framework" presented by Thomas Euler [13] classifies tokens along 5 dimensions: purpose, utility, legal status, underlying value, and technical layer. This framework aims to look at a single token from different perspectives at the same time, and it is a first attempt to capture and summarize the multifaceted nature of tokens. Another relevant contribution to the collective effort of capturing the nature of tokens is provided by William Mougayar [10], [14]. His framework revolves around 3 classification dimensions (role, features, and purpose) and focuses primarily on the business-related aspects of tokens, with particular reference to their entanglement with the business model of the token issuer and the incentives arising for the token holders.
Looking at the contributions on token classification and taxonomy coming from companies and business practitioners, the "Taxonomy Report on Cryptoassets" by CryptoCompare [15] is an extensive analysis that provides several valuable approaches to the classification of tokens. More than 200 tokens have been examined and classified by means of 30 unique attributes, covering economic, legal, and technological features. In particular, two grouping criteria are of special interest: Rationale to Possess and Economic Value Drivers. The former classifies tokens according to the main reason that drives token holders to acquire and keep their tokens, while the latter groups them based on the mechanisms underlying their price trend and fluctuations. This is a well-structured approach to classifying tokens in relation to the incentive system that they convey and to the behavior that they are intended to induce in token holders.
More recently, in November 2019, the Token Taxonomy Initiative published the first version of the "Token Taxonomy Framework" [16], resulting from a wide cross-industry effort to define a common standard to describe and design tokens. In particular, this framework [17] aims not at introducing yet another standard, but rather at creating a "metastandard". Its role is to bridge token protocols and platforms to ensure effective interoperability, to lay the foundation of a common understanding of tokens, and to create a shared language that can empower communication between technology experts and business practitioners. Proceeding from first-principles thinking, the Token Taxonomy Framework classifies tokens along five characteristics: Token Type, Token Unit, Value Type, Representation Type, and Template Type [17], [18]. These characteristics are intended to describe the nature of a token and the role it plays within a business model. Furthermore, the description approach of the Token Taxonomy Framework is particularly interesting, since it represents each token through a formula combining the token's base types, behaviors, and property sets. This modular approach can reuse and combine token features and, even if it is at an early conceptual stage, it is promising.
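The formula-based, compositional description adopted by the Token Taxonomy Framework can be illustrated with a minimal sketch. The class name and the example behaviors and property sets below (e.g., "divisible", "mintable", "SKU") are illustrative assumptions, not the framework's normative artifacts:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TokenFormula:
    """Minimal sketch of a TTF-style token description:
    a base type composed with reusable behaviors and property sets.
    Names below are hypothetical, for illustration only."""
    base_type: str                                   # e.g., "fungible" or "non-fungible"
    behaviors: List[str] = field(default_factory=list)
    property_sets: List[str] = field(default_factory=list)

    def formula(self) -> str:
        # Compose the parts into a single human-readable formula string.
        parts = [self.base_type] + self.behaviors + self.property_sets
        return " + ".join(parts)

# Hypothetical example: a divisible, mintable fungible token with an SKU property set.
token = TokenFormula("fungible", ["divisible", "mintable"], ["SKU"])
print(token.formula())  # fungible + divisible + mintable + SKU
```

The appeal of this modular representation is that behaviors and property sets, once defined, can be reused across token descriptions instead of being re-specified for each token.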
From this overview of the most relevant token classifications and taxonomies available in the literature, it emerges that the adopted approaches are very diverse and the methodologies heterogeneous. Therefore, with the goal of performing a comparative assessment of the classification frameworks, a normalization is needed first.

V. METHODOLOGY
Both tokens and their classifications show a high degree of variety, as previously discussed. To handle this heterogeneity, a structured methodology is required and, in the context of this work, General Morphological Analysis (GMA) has been chosen. GMA was developed by Fritz Zwicky [19], [20] as a method to describe and assess problem complexes characterized by non-quantifiable and multi-dimensional properties, by mapping all the possible relationships, or configurations, occurring in the given problem complex. Such a morphological approach allows the identification of recurring patterns and the construction of non-quantified inference models [21]-[23].
Given its characteristics and previous applications, GMA represents a comprehensive methodology that can be effectively applied to the description of tokens, encompassing all their non-quantifiable and multi-dimensional properties. In particular, a token classification framework can be represented using a morphological field, resulting from the following steps:
• identify and properly define the dimensions of the problem, i.e., the parameters along which each token can be mapped;
• for each dimension of the problem, define a spectrum of values representing the relevant states or conditions that the dimension can assume;
• generate a morphological field including all the possible combinations of the values assumed by each dimension. The number of all the possible configurations of the problem complex is the product of the number of conditions (values) under each parameter (dimension).
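The construction of a morphological field can be mechanized as a Cartesian product of the value spectra. The following sketch uses three hypothetical toy dimensions, not the actual dimensions of any of the surveyed frameworks:

```python
from itertools import product
from math import prod

# Hypothetical toy morphological field: three dimensions with their value spectra.
dimensions = {
    "Underlying value": ["asset-backed", "network value", "share-like"],
    "Fungibility": ["fungible", "non-fungible"],
    "Transferability": ["transferable", "restricted"],
}

# Every configuration is one combination of values, one per dimension.
configurations = list(product(*dimensions.values()))

# The field size is the product of the number of values under each dimension.
field_size = prod(len(values) for values in dimensions.values())
assert len(configurations) == field_size  # 3 * 2 * 2 = 12
```

A single token is then described by picking exactly one value per dimension, i.e., one of the 12 configurations of this toy field.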
The GMA methodology is used in the first place to perform a comparative analysis of the token classification frameworks. In particular, this procedure is leveraged to address the need for normalization highlighted previously: all the frameworks are traced back to a morphological field representation to make them comparable. Once normalized, the different token classification frameworks are critically analyzed to identify recurring morphological dimensions and values, to spot gaps and possibly uncovered token properties, and to dismiss redundant or misleading definitions.
Building on this comparative assessment, a new morphological token classification framework is developed and proposed.

VI. A CLASSIFICATION OF CLASSIFICATIONS
An extensive analysis of both scientific and grey literature has been performed, and 8 token classification frameworks have been selected based on their relevance, their comprehensiveness, and the role of the respective issuer, considered as a proxy of the potential normative impact. Some of the selected frameworks have been described previously in the state-of-the-art overview. Table I presents a summary of the GMA-normalized token classification frameworks. For each framework, the number of dimensions (i.e., the mapping parameters of the token) and the average number of values that each dimension can assume are reported. It is worth highlighting that the number of dimensions quantifies the multi-dimensionality of the framework, i.e., the variety of points of view concurrently adopted in the description of the token, while the average number of values per dimension is a proxy of the level of detail of the non-quantitative alternatives available for the description of the token. Finally, the number of all the possible configurations (i.e., the product of the number of values under each dimension) represents the overall size of the resulting morphological field.
Analyzing the metrics presented in Table I, it is clear that different authors adopt different approaches in creating their token classification frameworks: some give priority to a multi-dimensional outlook (higher number of dimensions), while others limit the description to a small number of parameters but dive deep into the detailed alternatives for the selected dimensions (higher number of values). In general, the size of the morphological field scales faster with the number of dimensions, so, when aiming at the most comprehensive framework, multi-dimensionality is to be prioritized over the level of detail.
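The scaling argument can be checked with a toy calculation (the value counts below are illustrative, not taken from Table I): adding one dimension multiplies the field size by that dimension's value count, whereas adding one value to an existing dimension grows the field only by a fraction.

```python
from math import prod

# Illustrative baseline: 4 dimensions with 3 values each.
values_per_dimension = [3, 3, 3, 3]
base = prod(values_per_dimension)                        # 3^4 = 81

# Adding a 5th dimension (3 values) multiplies the field size by 3.
with_extra_dimension = prod(values_per_dimension + [3])  # 3^5 = 243

# Adding a 4th value to one existing dimension grows it only by 4/3.
with_extra_value = prod([4, 3, 3, 3])                    # 108

assert with_extra_dimension / base == 3.0
assert with_extra_value / base == 108 / 81
```

In other words, the field size grows exponentially in the number of dimensions but only linearly in the number of values of any single dimension.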

VII. PROPOSAL OF A MORPHOLOGICAL TOKEN CLASSIFICATION FRAMEWORK
Beyond the comparative assessment, the GMA-normalized token classification frameworks are also leveraged to draft the proposal of a new framework.
As a starting point, the cumulative 42 dimensions identified in the 8 frameworks are thoroughly and critically analyzed to select the most meaningful ones and to filter out a subset of dimensions that is effective and sufficient to describe a token altogether. To structure this selection process, the dimensions are first grouped into 5 significant domains:
• Technical domain, including all the technical characteristics of the token, with reference to the level of integration along the technological stack, to the blockchain infrastructure, and to the protocol;
• Behavior domain, including all the inherent functional characteristics of the token that rule the possible actions that can be performed with the token (capabilities or restrictions);
• Inherent Value domain, including the dimensions that describe the economic value of the token itself and, in particular, how this value originates and which factors influence it and cause price fluctuations;
• Coordination domain, including all the token dimensions that enable coordination among the actors of the token-based ecosystem;
• Pseudo-archetypes, consisting of token dimensions that anticipate a set of token archetypes, since they implicitly combine different token characteristics.
The subdivision of the dimensions among the domains they belong to streamlines the identification of redundant elements, i.e., superimposable dimensions appearing in different frameworks. Furthermore, this approach allows the identification and dismissal of pseudo-archetypes, which implicitly summarize a set of token characteristics and may therefore generate confusion. The resulting selection comprises 15 dimensions out of the initial 42.
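The grouping-and-filtering procedure above can be sketched as a simple pipeline: dismiss pseudo-archetypes and deduplicate superimposable dimensions via a canonical-name mapping. The dimension names, domain assignments, and framework labels below are hypothetical placeholders, not the actual 42 dimensions:

```python
# Hypothetical fragment of the cumulative dimension pool: (framework, dimension, domain).
pool = [
    ("A", "Blockchain layer", "Technical"),
    ("B", "Technical layer", "Technical"),        # superimposable with "Blockchain layer"
    ("A", "Transferability", "Behavior"),
    ("B", "Security token", "Pseudo-archetype"),  # implicitly bundles several characteristics
    ("C", "Value driver", "Inherent Value"),
]

# Map superimposable dimensions from different frameworks to one canonical name.
canonical = {"Technical layer": "Blockchain layer"}

selected = set()
for _, dimension, domain in pool:
    if domain == "Pseudo-archetype":
        continue                                  # dismissed: pseudo-archetypes are excluded
    selected.add(canonical.get(dimension, dimension))  # deduplicate via canonical name

print(sorted(selected))  # ['Blockchain layer', 'Transferability', 'Value driver']
```

Applied to the real pool, a procedure of this kind reduces the 42 candidate dimensions to the 15 retained ones.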
Afterward, one further dimension, belonging to the technical domain, is added to fill a gap related to the nature of the underlying blockchain, with particular reference to the accessibility of transaction validation, i.e., permissionless vs. permissioned blockchains. Finally, the spectrum of values for each dimension is revised, filling the gaps identified and improving the uniformity and consistency of the terms used. The outcome is the morphological token classification framework presented in Fig. 1.
The proposed framework is characterized by 16 dimensions and more than 41 million possible configurations, creating a very extensive morphological field. The indication of the dimension-grouping domains is retained to facilitate navigation of the framework.
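As an order-of-magnitude check (the per-dimension value counts below are hypothetical; the actual spectra are those of Fig. 1), 16 dimensions averaging roughly 3 values each already yield a field exceeding 41 million configurations:

```python
from math import prod

# Hypothetical: 16 dimensions with 3 values each (the real counts are those of Fig. 1).
field_size = prod([3] * 16)   # 3^16
assert field_size == 43_046_721
assert field_size > 41_000_000
```

This illustrates why even modest value spectra, multiplied across 16 dimensions, produce a morphological field of tens of millions of configurations.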
As initially stated, the purpose of the token classification framework is to provide guidance in the description of a token, ensuring completeness, consistency, and effective comparability.

VIII. NEXT STEPS: A PROPOSAL FOR TOKEN TAXONOMY
The morphological token classification framework proposed in this work is an initial version, largely built on the literature and on the study of a limited number of token-based ecosystems. To further refine the proposed framework, an empirical and iterative approach is planned. In particular, the framework will be applied to classify a set of diverse tokens to further validate its applicability and suitability and, where needed, to update dimensions and value sets. Specifically, the coordination domain will be tested to prove its effectiveness in describing the incentive systems deployed in token-based ecosystems.
Afterward, the framework will be used to identify recurring patterns in token descriptions. This is expected to lead to the identification of groups of alike tokens and, ultimately, to the definition of token archetypes. These will be characterized by a more or less extensive overlap of the values assumed along different dimensions. Finally, according to the degree of overlap, a hierarchical arrangement of the identified token archetypes will be performed, resulting in a proper token taxonomy.

ACKNOWLEDGMENT
This work was conducted in the context of the HELIOS Project, a Horizon 2020 initiative funded by the European Commission. Furthermore, this research activity is part of an extensive and ongoing effort to understand and map the innovation potential of Distributed Ledger Technologies (DLTs) carried out by OverTheBlock, a permanent observatory on Blockchain technology powered by LINKS Foundation.