Governance and Assessment of Future Spaces: A Discussion of Some Issues Raised by the Possibilities of Human–Machine Mergers

This article explores potential privacy, security, and ethical issues raised by technologies that allow for human–machine mergers. The focus is on research, development, and products at the intersection of robotics, artificial intelligence, Big Data, and smart computing. We suggest that there is a need for a more holistic approach to the assessment of such technology and its governance. We argue that in order to determine how the law will need to respond to this particular future space, it is necessary to understand the full impacts of human–machine mergers on societies and our planet, going beyond the three issues mentioned above. We aim to encourage further discussion and research on this question, as well as on the broader organism–machine merger question, including on our FLE5SH (F = financial, L = legal, E5 = economic, ethical, equity, environmental, and ecosystem, S = socio-political, H = historical) framework for the governance and assessment of these and other future spaces.


Introduction
Today, it seems we stand at the beginning of an age of ubiquitous computing and attempts to merge the physical, digital, and biological realms. 1 This is also, increasingly, a time of technological convergence (Kearns 1998: 975; O'Brolcháin et al. 2016; Perakslis et al. 2016), with an ever-larger array of objects having Internet connectivity. All of this poses significant risks for individual and group privacy and security (Weber 2010; European Commission 2013; Global Privacy Enforcement Network 2016; UK Information Commissioner's Office 2016), but it also raises further issues for environmental, human, and animal health, as well as the prospect of unemployment for many as jobs are increasingly automated (Frey et al. 2016; Solon 2016; Williams 2017). Developments in computing technology have drastically altered our world and, while there will be some gains from many of these advances, most technologies pose both risks and benefits and are not in themselves neutral. Since there will be both winners and losers, there is a need for a broader assessment of the impact of new technologies on society as a whole, the environment, and the planet.
Developments in Artificial Intelligence (AI) and computing are often viewed as transformative technologies. However, given the potential that future developments in these related fields have to alter our natural and built environment, impacting not only humans, plants, and animals but also entire ecosystems, there should be a wider debate about not only regulation and assessment of technology, but also the type of world we want to live in. Furthermore, although these technologies are often presented as transformative, developments in these related fields often have limitations (including misinterpreting data and failing to distinguish between things that humans would be able to correctly identify (Jordan 2018; Broussard 2018: chapter 1)), and current advances in AI are best viewed as giving rise to narrow AI (Jordan 2018; Bostrom 2014: 14-16). A September 2019 Royal Society report argues that 'Linking human brains to computers using the power of artificial intelligence could enable people to merge the decision-making capacity and emotional intelligence of humans with the big data processing power of computers, creating a new and collaborative form of intelligence.' (Royal Society Steering Group on Neural Interface Technologies 2019: 15). This proposed merger of neural interfaces (NI) and AI 'could open the way to game-changing applications ... However, the prospect also raises a number of ethical issues concerning our autonomy, privacy and perception of "normality"' (Royal Society Steering Group on Neural Interface Technologies 2019: 49).
There is significant interest and investment in technologies that increase connections between humans and computers. Key here have been developments in: AI (Simonite 2017; Peet and Wilde 2017; Patterson 2017; ACM 2018); machine learning (ML); wearable technology (such as FitBit and Garmin); Virtual Reality (such as Oculus Rift, HTC Vive, Samsung Gear VR, and Neurable) 2 ; implants (such as Northwestern University's tiny antennas (Dormehl 2017; Nan et al. 2017) and Elon Musk's Neuralink venture (Constine 2017a; Lopatto 2019; Statt 2017)); brain-to-computer interfaces (such as Neurable and Neurovigil's iBrain 3 ); and exoskeletons and bionic limbs (such as Berkeley's Lower Extremity Exoskeleton (BLEEX) and Human Universal Load Carrier (HULC) 4 ). Many of these projects could allow for humans to be augmented, enhanced, and altered. For example, implants could enable extra senses. Meanwhile, bionic limbs and brain-to-computer interfaces could alter capacities and capabilities, which would in turn permit some form of motor and/or thought control (further examples can be found in Royal Society Steering Group on Neural Interface Technologies (2019)).
If successful, these aptitudes and facilities could change the very nature of what it means to be human. Some conjecture that humans in their current form will be replaced by a posthuman or transhumanist future (Barfield 2015: 1-20) and that human-machine merger is inevitable (Barfield 2015: 1-2). However, we suggest that this scenario is not a fait accompli and, further, that if humans are to be re-engineered in this way then the matter must be subject to extensive public debate, scrutiny, and regulatory oversight (for example, recent discussions of such issues in the biosciences can be found in Newmann and Stevens 2019a, b; HEGAAs http://web.evolbio.mpg.de/HEGAAs/). Broadening our purview, the same arguments apply if we generalize to organism-machine mergers, where the term 'organism' encompasses individual or groups of microbes and/or macrobes (entities such as bacteria, fungi, viruses, plants and animals, including humans) and 'merger' refers to the two components 'working together' so that boundaries between them become blurred. Since coordination and control of such hybrid entities will require tighter coupling of their activities, we anticipate closer interactions between, and alignment of, the molecular communication (Suda and Nakano 2018; Akyildiz et al. 2019) and organism-machine future spaces. That is, we anticipate the rise of research and development into technologies inspired by chemical communication within and between the (a)biotic worlds, and applications attempting to manipulate human behaviour (Kupferschmidt 2019; Chemical communication in humans 2019; Schmidt et al. 2019), much in the same way as, for example, chemical ecology-inspired bioactive molecules are used for pest management (Beck and Vannette 2016).
As new and emerging technologies combine to facilitate the potential merger of a wide range of corporeal bodies, there will also be differences in the degree to which the intercommunicating components are impacted. For instance, organisms may do all or part of the computing for machines (organism computation), may work together with assistance from computers (computer-supported cooperative work), may live lives intermingled with computer systems (social computing and computer-mediated communication), and may form organism-robot hybrids (cybernetic organisms). Organism-machine mergers are examples of perhaps one of the most important future spaces: systems with physical-biological-digital interfaces at the microscopic, mesoscopic, and/or macroscopic scale. Genetic material-digital sequence information pairings, such as the genomes of agricultural crops and livestock (Hammond 2017), are molecular-level exemplars.
Written from a broadly legal perspective, the overall goal herein is to initiate a discourse between policy makers, lawmakers, the general public(s), and industry on how to (1) think 5 about governance and assessment in future spaces, 6 (2) do the same for existing technologies, and (3) facilitate free access to meaningful information on advances in fields and impacts of technologies to local, regional, national and international stakeholders. This is because, whilst our primary focus is developments and ideas that could enable human-machine mergers, we believe it is also necessary to draw attention to advances in fields such as molecular communication, nanotechnology, genome sequencing, CRISPR, and gene drives. For instance, these technologies could allow a wider range of sensors to be featured on clothing or a person's skin and could enable various entities to be implanted into humans, animals, and plants. This could in turn permit (genetic) modification of all or parts of many different life forms, and the use of microorganisms to manipulate the complex behaviours of animals (Rohrscheib and Brownlie 2013). Examples include: spinach leaves that have been embedded with carbon nanotubes to detect explosives (Trafton 2016); the implantation of self-destructing nanobots in mice (Gao et al. 2015; Seppala 2015); commercial genetic tests; and gene editing of plants, insects such as mosquitos, and now human embryos (Young 2017; Ma et al. 2017; Sanders 2017). Here, issues such as (bio)security, (bio)safety, and (bio)privacy, impacts on ecosystems, and consent need to be considered (Reeve et al. 2018; HEGAAs http://web.evolbio.mpg.de/HEGAAs/; African Centre for Biodiversity 2019; Borger 2019). How to ensure that genetically altered organisms, whether or not merged with machines, are not released into the environment accidentally is a vital issue that needs further attention.
These fields are one prong of a trend towards merging the physical (built and/or natural) world with the cyber world. This can also be seen in developments in the Internet of Things (IoT) and in smart computing systems more broadly, for instance in the rise of smart buildings and the connection of critical infrastructure such as the electrical grid to the Internet. Reducing energy consumption and improving efficiency are desirable goals. However, making an entire country's energy supply reliant on the Internet introduces risks and vulnerabilities, such as a large-scale attack disabling the power supply of an entire nation. If power plants, dams, and other infrastructure have not been maintained properly, connecting them to the Internet may not necessarily improve their reliability or security. Hence, programmes aimed at maintaining the physical security of facilities and ensuring the cyber security of industrial control systems are critical and necessary investments (Mo et al. 2012; Hahn et al. 2013; Tuptuk and Hailes 2016).
To a large extent, attempts to merge humans with machines depend on a mechanistic perception of both humans and the human brain and on a view of intelligence as computation (O'Connell 2017: 55-6). Developments that link humans with machines by direct means such as implants or brain-to-computer interfaces raise a number of issues for privacy, security and ethics. These include: How can we ensure the protection of an individual's privacy? Will there be privacy settings for an individual's brain? How can we ensure that an individual has control over their body and mind and is free from manipulation of their thoughts and bodies by third parties? 7 What happens if malware could affect the human brain? How do we ensure security of the human brain and body? How would such devices affect communication between the brain and the gut (Hu et al. 2019)? Normally, before we allow drugs and medical devices to be marketed, they are subject to oversight and pre-market review. How do we ensure that any implant is safe for human, plant, and animal use before it is made widely available?
Further ethical and legal issues include: if there are various forms of humans, some augmented and some not, who will be entitled to the protection of human rights? Could distinctions be made between an augmented human and a robot that did not have a genetic link to the human species? How do we implement consent in the context of brain-to-computer interfaces or other technologies that enable the human body to be connected to the Internet? How should society address the loss of gainful employment and increased economic inequality produced by robot- and/or computer-guided automation? How do we ensure the protection of individual autonomy in this context? For instance, medical law often affords strong protection to the rights of patients to refuse treatment. 8 How will this play out if a government wanted to require its citizens to have microchips, as is already required for animals such as dogs and cats? There are some existing examples of this: SJ, a Swedish train company, has introduced implanted microchips for its passengers as a form of biometric train ticket (Coffey 2017), whilst two companies, the Swedish startup Epicenter and the American Three Square Market, have introduced microchips for their employees (Brooks 2017; Grimm 2017; Michael et al. 2017; News.com.au 2017; Sheppard 2017; Solon 2017; Associated Press 2017).
As more of the natural and built environment is connected to the Machine, such issues are amplified. If humans become part of a telecommunication network such as an Internet of Everything, 9 where our thoughts can be read, monitored, and potentially manipulated, then it will be very difficult to turn back the clock. An illustrative example of this point is Facebook's announcement that it wants to develop a brain-to-computer interface (Constine 2017b; Strickland 2017). It has already emerged that Facebook does monitor what its users type and delete without posting (Sørensen 2016). Imagine if this was not just a matter of typing words on a screen, but a direct link to someone's thoughts. This would potentially reduce privacy in quite a revolutionary way, compounding the challenges we already face with targeted marketing and online behavioural advertising (Duhigg 2012; Lubin 2012; Papadopoulos et al. 2017; Narayanan and Reisman 2017). A well-known example of the ways that businesses can obtain information about customers is that of Target, a company that was able to predict whether a customer was likely to be pregnant based on the purchase of 25 products and then engaged in targeted marketing with coupons for baby products (Ellenberg 2014). Ensuring the security of these types of technologies is a significant challenge that should not be underestimated (Bonaci et al. 2014: 47; Li et al. 2015: 663-666). Software-based systems are prone to many vulnerabilities, and recent research has demonstrated that it is possible to embed malware in synthetic DNA (Ney et al. 2017; Tracy 2017; Greenberg 2017; Timmer 2017).
As technological convergence increases, there is the potential for Big Social Engineering, which raises further questions. These include: How can we ensure transparency about the full functionality of particular technologies? How can we ensure that people have access to information about technologies that may be used to influence them without their consent or knowledge so that they can make informed choices about whether to use particular technologies and reject adoption if they want to? What kind of pre-market review should social engineering technologies be subject to? What rights will people have to their private thoughts? Could there be a privacy setting for a person's brain and what will happen if this is overridden? How can we ensure that existing rights and freedoms are protected? What about security and control? How will 'brain hacking' allow people to be influenced or conditioned to act in particular ways without conscious knowledge of this influence? What are the consequences when applications encourage addiction?
Depictions of AI, cyborgs, and androids from science fiction also exert a significant influence on how many view innovations in these fields (Calo et al. 2016: 1-22), as well as influencing lawmakers (Walter 2016; Warwick 2016). These depictions may also be influencing inventors and shaping what they expect to develop (perhaps both consciously and unconsciously). They may also be employed in marketing to foster acceptance of (bio)technology.
In this article we seek to draw attention to some of the issues raised by developments in this field and to encourage discussion of not only appropriate regulation, but also technology assessment, a task for which, in previous work, we have proposed the FLE5SH (F = financial, L = legal, E5 = economic, ethical, equity, environmental, and ecosystem, S = socio-political, H = historical) framework (Phillips et al. 2015; Phillips and Mian 2017).
However, we also wish to highlight the more recent proposal by the ETC Group of Global Overview Assessments of Technological Systems (G.O.A.T.S). ETC presented the 'G.O.A.T.S approach to Science, Technology, and Innovation (STI) Governance' at the UN STI Forum in May 2017 (ETC Group 2017a, b: 1). The G.O.A.T.S provides for a 'bottom up "technology landscaping" project involving multi-actor assessment organised thematically around the 17 SDGs' (Sustainable Development Goals) (ETC Group 2017a). As the ETC Group notes, 'Technology is established as a key cross-cutting theme of the 2030 Agenda for Sustainable Development which charts a path to the future for governments, and 13 of the 17 SDGs specify that technological solutions will be necessary to achieve them.' (ETC Group 2017a). This approach can offer a means for 'policymakers, civil society and others to better perceive and navigate the innovation landscape', considering both 'the potential promises and pitfalls' of technologies (ETC Group 2017b: 1-2). We support this approach, and our aim with FLE5SH is to facilitate a similarly broad, multi-dimensional assessment of technologies.

AI and Augmented Humans
Developments in science, technology, engineering, mathematics and medicine (STEMM) promise a tomorrow where 'errors' or 'deficiencies' in an organism's genetic and/or phenotypic makeup can be modulated, enhanced, corrected, redefined or eradicated. A post-human world could be populated by people who have additional senses, such as artificial eyes equipped with video cameras and the ability to feel electromagnetic pulses, enhanced intelligence, and direct connections with computers through a variety of mechanisms including Virtual Reality, Augmented Reality, prosthetics, implants, and other forms of brain-to-computer interfaces. Such beings may be human on some level and machine on another, but they will not be able to retain privacy (or security) of their own thoughts. In the 2018 American science-fiction dark comedy film 'Sorry to Bother You', workers are made stronger and more obedient by transforming them into 'equisapiens', half-human, half-horse hybrids created when a human snorts a gene-modifying powder (Riley 2018).
Already, there are a number of products and services on the market that are part of the Quantified Self movement. These range from direct-to-consumer genetic tests, through wearable fitness monitors such as FitBit and Garmin, to FashTech, which incorporates sensors into clothing; examples include heart rate monitoring bras such as the Mi Pulse Smart Bra and the Vitali Everyday Smart Bra. 10 Some of these devices have already begun to be used in the courtroom (Chauriye 2016; Jackson et al. 2017). However, research (Hilts et al. 2016; Norwegian Consumer Council 2017) has demonstrated that a number of such devices (including fitness bands and smart watches) are prone to security vulnerabilities and that it is possible to create a false fitness record on some devices. This is a significant issue if such devices are to be relied upon as evidence in the courtroom. Furthermore, as more forms of personal information are collected and linked, there is an increasing risk to informational privacy for individuals and their families (Drabiak 2017).
There is also growing concern and debate around technology design and specifically the issue of technologies being designed to be addictive, as well as the impact of the use of screens on children and young people. 11 Germany has banned the sale of smartwatches to children, and it is possible that other countries will also begin to restrict the sale of certain products to young people (Johnsen 2016; Wakefield 2017; O'Brien 2018).
In recent years there has been increasing interest in the idea of an approaching technological singularity or, as Nick Bostrom terms it, an intelligence explosion (Bostrom 2014: 4, 62-77). The basic premise centres on creating human-level machine intelligence. Once this is achieved, the suggestion is that AI will improve itself and quickly surpass human intelligence. Bostrom defines superintelligence as 'any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.' (Bostrom 2014: 22-23). His work is timely and of great value to this discussion. The book concludes with an analogy between the development of superintelligence and a child playing with a bomb (Bostrom 2014: 260-61), a very useful starting point for highlighting the importance of paying sufficient attention to getting this right.
It is also important to understand that the development of a superintelligent AI is not at present a foregone conclusion, although a number of experts do view it as likely. However, even if this does come to pass, it does not necessitate that all humans must be augmented and merged with machines. These are separate issues that are both in need of further attention. There is growing attention to and concern over the safe development of AI technology. The letter calling for a ban on lethal autonomous weapons released at the International Joint Conference on Artificial Intelligence (IJCAI 2017) is an example (Vincent 2017; Future of Life Institute 2017a, b). Others include the 'Partnership on AI' 12 and 'A Unified Framework of Five Principles for AI in Society' (Floridi and Cowls 2019).
We cannot predict what the interests of a superintelligent AI will be, and we support the calls for more discussion and oversight of this area. Recent research from Google's DeepMind has shown that AI can behave both collaboratively and in more aggressive ways (Burgess 2017; Leibo et al. 2017; International Foundation for Autonomous Agents and Multiagent Systems 2017). Since AI may behave unpredictably, it is vital that, before we reach the advent of a superintelligent AI, we understand more about how less advanced AI operate and what their interests could be. There is a growing literature, particularly in the context of autonomous vehicles (Bradshaw-Martin and Easton 2014; Bonnefon et al. 2015; Etzioni and Etzioni 2016), about the need to code human values into AI systems. Although this seems advisable, since humans do not always share all values (Mignolo 2010, 2013; Grosfoguel 2012), perhaps one option is a requirement for some form of balancing and explanation, which could assist AI to make decisions contextually, allowing for consideration of a number of factors. An example from science fiction can demonstrate this point. In Arthur C Clarke's 2001: A Space Odyssey, Hal is taught to lie, cheat, and deceive humans. Hal's abilities are linked closely with the achievement of particular goals, in this case the completion of Hal's mission (Clarke 2016). However, much of what he is designed to do is not balanced out by explanation. The point here is that in order for AI and humans to work together successfully, AI will need to understand human motivations and the reasons we do or do not behave in certain ways. Such understanding could help to avoid AI deciding to do something that could result directly or indirectly in human extinction.
However, our concern here is also to highlight the significance of developments that allow for humans to be revamped so that they become cyber-physical and, for instance, elements of the Internet of Bio-Nano Things (Akyildiz et al. 2019). While there should be discussion and oversight of AI, implants, and brain-to-computer interfaces, other products also need attention. Since AI systems that have an understanding of human motivations and emotions might be useful in developments that merge humans and machines, the Precautionary Principle could assist or be invoked here. It also seems advisable to look at existing governance mechanisms that have regulated medical devices and pharmaceutical drugs. While such systems are imperfect, they could be helpful in thinking further about the governance of implants and brain-to-computer interfaces. Generally, it would seem wise to ensure the safety of such products before implanting them into people.
Ideally, we do not want the development of superintelligent AI to be the work of a lone individual, be it a lay person or researcher, in their basement or (computer) laboratory. Likewise, while there is already a DIY biohacking movement (Barfield 2015: 135-176; Bradley-Munn and Michael 2016; Mallonee 2017) and it is true that some individuals want to alter their bodies in new ways, this is also something that needs more oversight. Furthermore, the addition of new senses, different forms of implants, and brain-to-computer interfaces is not something that should be forced on people without their consent.

Technology Assessment: Time for a Holistic Approach?
A variety of technologies, such as smartphones, laptops, tablets, and wearables, as well as a burgeoning range of other devices which form the IoT, are now accessible to and used by a significant portion of the world's population. For example, Facebook now exceeds 2 billion monthly active users (Constine 2017c), Apple has sold more than 1.2 billion iPhones (Morris 2017), and the number of mobile phone users will likely exceed 5 billion in 2019 (Statistica 2017). However, while technological solutions are often promoted as a means to solve many of the planet's problems, much of this high technology consumer culture involves products that are not made to last but to be replaced on a regular basis, which depletes resources and places burdens on resource consumption, particularly energy consumption (Vince 2012; Mian et al. 2019). An interesting initiative to combat this throwaway culture is the Swedish Government's introduction of tax breaks for the repair of common consumer products, including clothing, bicycles, and washing machines (Starritt 2016).
Many of these technologies involve the collection, storage, transmission, and sharing of a variety of forms of information, which can include personal and sensitive information, including health and genetic information. There is growing use of cross-device and cross-platform tracking, which attempts to harvest more information from individuals based on their purchasing behaviour, as businesses seek to identify whether viewing a particular advertisement results in the purchase of their products or services (Chen et al. 2016; Federal Trade Commission 2017; Brookman et al. 2017).
There is now a growing variety of impact assessments that are either encouraged or required by law. These include: privacy impact assessments; sustainability impact assessments; environmental impact assessments; and ethical trade impact assessments. One example is that of data protection impact assessments, which are required under Article 35 of the European Union's General Data Protection Regulation. These are to be carried out:

Where a type of processing in particular using new technologies, and taking into account the nature, scope, context and purposes of the processing, is likely to result in a high risk to the rights and freedoms of natural persons, the controller shall, prior to the processing, carry out an assessment of the impact of the envisaged processing operations on the protection of personal data. 13

If we think about this in the context of technologies such as implants or brain-to-computer interfaces, it is likely that such technologies would be caught by this requirement.
Our recently proposed FLE5SH framework provides a new approach to help organize, interpret and assess past, extant, emerging and new research and development in STEMM (Phillips and Mian 2017; Phillips et al. 2015). The nine lenses in this framework provide a more holistic approach to technology assessment and regulation. We are including such a broad range of lenses because we believe that many if not all technologies need to be assessed from as wide a perspective as possible.
To some extent the FLE5SH framework can be seen as allowing the formation of a social contract, whereby all stakeholders are required to engage in a review of this wider spectrum of the possible impacts of technologies. Where risks are seen as likely, imminent or serious, this may (or probably should) trigger application of the Precautionary Principle.
There is growing interest in central banks maintaining financial stability, 14 together with interest in ethical investment from sectors such as pension funds. Consequently, looking at digital technology, such as distributed ledger technology (for example, cryptocurrencies and smart contracts), in the round can help give a more balanced picture of the respective benefits, risks, and challenges raised by a specific technology (Wolbring 2009; ETC Group 2011; Daño et al. 2013) such as Bitcoin or Ethereum (Reijers et al. 2016; Zimmer 2017). In order to have a more holistic assessment of technology, we advocate for a broad dialogue amongst all stakeholders, including the public, and especially groups that have historically been marginalized, such as Indigenous Peoples.
Taking a more holistic approach also allows for consideration of the relationship between technology and Nature and of technology's impact on Nature. Here we are thinking not only about humans, but also about other organisms and the rights of Nature. The granting of forms of legal personhood and human rights for the protection of rivers in New Zealand and Ecuador provides illuminating examples. 15 Our proposed approach aims to assess the interactions amongst and between components of all Earth systems: the lithosphere, atmosphere, hydrosphere and biosphere. The FLE5SH framework provides a common toolbox that diverse stakeholders, including researchers, policymakers, regional and national social movements, civil society organisations, and others, can use to evaluate technologies and, if warranted, to choose a different future.
At present, many products and services are coming to market without pre-market review and without comprehensive impact assessments. Regulators have largely held back and there is a general tendency to let the market decide and to promote industry self-regulation. The law may have a history of struggling to keep up with technological progress, but we should not accept this as a permanent state of affairs that stops discussion of appropriate regulation and accountability. Unforeseen harms can occur if there is no incentive for a company to behave responsibly other than loss of reputation. Fines for violating laws may be regarded as a cost of doing business.
In relation to discussion of technology assessment, we suggest utilising the Precautionary Principle. This principle has been invoked in the contexts of environmental policy and public health. It is an important principle in International Environmental Law and is set out in the Rio Declaration on Environment and Development (1992). Principle 15 of the Rio Declaration provides:

In order to protect the environment, the precautionary approach shall be widely applied by States according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation. (Rio Declaration on Environment and Development 1992)
It is also set out in article 191 of the Treaty on the Functioning of the European Union (Treaty on the Functioning of the European Union 2007).
A useful depiction of when the Precautionary Principle ought to be relied upon stems from the Consensus Statement developed at the Wingspread Conference on the Precautionary Principle, held in 1998, which provides that:

When an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically. In this context the proponent of an activity, rather than the public, should bear the burden of proof. (Science and Environmental Health Network 1998; Kriebel et al. 2001: 871)

The Consensus Statement further suggests that:

The process of applying the Precautionary Principle must be open, informed and democratic and must include potentially affected parties. It must also involve an examination of the full range of alternatives, including no action. (Science and Environmental Health Network 1998)

Once the Principle is triggered in relation to a particular technology, the situation should then be reviewed when more scientific information becomes available that would enable assessment. 16 While the Precautionary Principle has often been invoked in the context of environmental protection, as Som et al. (2009) suggest, it can also be applied to social subjects and to thinking about potential frameworks for a sustainable information society (Som et al. 2009; Danaher 2016). We suggest that this Principle should be invoked when considering whether to adopt these new technologies. Although smart infrastructure has been promoted as facilitating the development of more sustainable, cost-effective, and efficient cities, connecting things such as energy, water and monetary supply chains to the Internet renders them vulnerable to physical and cyber attacks (Taylor 2015).

Why is a Historical Lens Necessary?
Recently, the IEEE released a report on the ethical implementation of autonomous and intelligent systems (A/IS), A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (IEEE 2019). Thus, whilst STEMM students entering a digital society require a firm foundation in discipline-related matters, they ought also to be conversant in non-technical issues. Given this need for students to become more multilingual, but in light of an already highly constrained curriculum, how can the thorny transition from fluency in STEMM to expressivity in SHTEAMM be made (S: Science, H: Humanities, T: Technology, E: Engineering, A: Arts, M: Mathematics, M: Medicine)? Consider the following exemplar. Direct-to-consumer genetic testing (DTC, aka personal genomics) is a way a person can access information about their genome from their home (National Human Genome Research Institute https://www.genome.gov/dna-day/15-for-15/direct-to-consumer-genomic-testing). Uses of AI/ML in this space range from the calling of genetic variants from high-throughput DNA sequencing data (for example, DeepVariant (Google Cloud 2019)), through secure storage and sharing of genomic data (Mittos et al. 2019) (for example, differential privacy (Page et al. 2018)), to developing apps for consumers to 'interact and experience DNA-powered insights' about, for example, health, ancestry, genetic relatedness, athletic ability, child talent, and infidelity (Phillips 2017, 2019). The DTC market is predicted to exceed $2.5 billion by 2024 (MarketWatch 2019).
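To make the idea of privacy-preserving sharing of genomic data slightly more concrete, the following is a minimal illustrative sketch in Python (not drawn from Page et al. (2018) or any other cited system) of the Laplace mechanism commonly used in differential privacy; the cohort figure, count and epsilon value are invented for illustration only.

```python
import numpy as np

def dp_count(true_count: float, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with epsilon-differential privacy via the Laplace mechanism.

    Adding or removing one person changes a simple count by at most
    `sensitivity`, so Laplace noise with scale sensitivity/epsilon masks
    any single individual's contribution to the released figure.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical example: reporting how many participants in a research cohort
# carry a particular genetic variant without exposing any one participant.
true_carriers = 1342  # invented figure for illustration
print(dp_count(true_carriers, epsilon=0.5))
```

The point of the sketch is simply that noisy, aggregate answers can be released in place of raw genomic records; real systems combine this idea with access controls, secure computation, and careful accounting of the overall privacy budget.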
In January 2019, the UK's Health Secretary Matt Hancock announced plans to offer healthy people the option to have their whole genome sequenced by the NHS for a fee, with these 'genomics volunteers', if they share data, receiving a personalized health report (Semsarian 2019). In March 2019, the House of Commons Science and Technology Committee launched an inquiry into commercial genomic testing to establish what safeguards need to be put in place to protect those who get tested (Science and Technology 2019). The Committee should be releasing recommendations later in 2019. One of the authors of the present article, Andelka Phillips, is using the example of the DTC industry's use of wrap contracts as its dominant means of governance to illustrate the challenges disruptive technologies pose for societies and for regulation. She has raised significant questions (Bates 2018) about whether: the services are fit for their claimed purposes; the genetic data and other personal information collected are being stored securely; sufficient protection for privacy is provided; companies are sufficiently transparent in their claims about the benefits and limitations of their services; and consumers actually understand the contracts they enter into when purchasing these tests. Beyond technical and legal solutions to such problems lie other concerns, notably the issue of biological and genetic determinism.
Dubbed '21st Century genetically informed social science', sociogenomics aims to understand the roots of complex behavior (Braudt 2018). For instance, genoeconomics posits that economic outcomes and preferences are about as heritable as many medical conditions and personality traits (Comfort 2018a). It suggests that financial behaviour can be traced to a person's DNA, so 'genetics could someday be used to build not just personalized medicine, but personalized policy that takes into account the genotypes that influence whether you and I are receptive to certain methods of instruction, or punishment, or therapy' (Ward 2018). Sociogenomics has also been characterized as opening a new door to eugenics, offering new ways in which 'genetic data could bolster scientific racism and encourage discrimination' (Comfort 2018b). The Victorian scientist Francis Galton coined the term eugenics (eu = good or true + genus = birth, race or stock) to describe the betterment of the overall quality of the gene pool (Das 2015, 2017). In his Anthropometric Laboratory, established in 1883 (Boulter 2017) at the International Health Exhibition in South Kensington, London, he measured, recorded and evaluated the mental abilities and physical characteristics of ∼ 10,000 people over a year (Das 2015). Galton's historical connections to University College London (UCL) and the eugenics movement he initiated have forced the university to confront its past (Bartlett 2018). December 2018 saw the launch of a Commission of Inquiry into the History of Eugenics at UCL (UCL 2018). One key issue is the university's role in teaching and researching eugenics in the past, present and future (Osei-Mensah 2019).
United Kingdom Research and Innovation (UKRI) 17 expects all researchers and their research organizations to commit to an approach that seeks continuously to 'Anticipate, Reflect, Engage and Act' (AREA). This approach will: Anticipate, by describing and analyzing the impacts, intended or otherwise (for example economic, social, environmental), that might arise; this does not seek to predict but rather to support an exploration of possible impacts and implications that may otherwise remain uncovered and little discussed. Reflect, by reflecting on the purposes of, motivations for and potential implications of the research, and the associated uncertainties, areas of ignorance, assumptions, framings, questions, dilemmas and social transformations these may bring. Engage, by opening up such visions, impacts and questioning to broader deliberation, dialogue, engagement and debate in an inclusive way. Act, by using these processes to influence the direction and trajectory of the research and innovation process itself (EPSRC 2019).
The UCL Inquiry and the AREA framework are relevant to AI/ML and their applications in human health and agriculture (Marr 2018). A shift from genetics to genomics (the study of organisms in terms of their full DNA sequences) at the turn of the 21st century is said to have given rise to a new form of eugenics, eugenomics (Aultman 2006). A move from 'personalized medicine' to 'precision health' and 'wellness genomics' during this period has raised the question of whether, a century from now, the latter two ideas will be viewed as eugenics is today (Jeungst et al. 2018). Might a similar fate await 'public health genomics' and 'precision public health', programmes such as whole genome sequencing of every newborn within a population (Molster 2018)? The HUMAN Project 18 (Human Understanding Through Measurement and Analytics), introduced in 2015, aims to measure, aggregate and analyse the biology, behaviour, environmental conditions and events of ∼ 10,000 New Yorkers over 20 years (Azmak et al. 2015). This Project uses medical records, biological samples, surveys, questionnaires, digital device data, third party data and other modalities in order to create 'synoptic and granular views of how human health and behavior coevolve over the life cycle and why they evolve differently for different people' (Azmak et al. 2015). Thus, it is important to understand Big Science and Big Data studies of humanity in Victorian and modern times (anthropometry (UCL Culture https://www.ucl.ac.uk/culture/galton-collection/galton-and-anthropometrics) yesterday, today and tomorrow) and, more generally, the past, present and future of datafication: the quantification of objects, actions, processes and other aspects of life and the world that previously were experienced or existed only in qualitative, non-numeric form.

Conclusion
It is hoped that this article will stimulate reflection about the following matters: the need to engage in a more public, democratic, and open discussion of technologies and their potential impact on society, the environment, and the planet; the need for greater oversight of technologies that pose significant risks to human and/or environmental health; the need to ensure that technologies that allow for the alteration of the genetic makeup of biological organisms are subject to oversight, especially regarding their safety; and the need for the development of appropriate laws and governance mechanisms that will protect the public, the environment, and the planet as a whole.
It should be noted that we have developed bodies of law such as consumer protection and product liability law for sound reasons. Permitting commercialisation of technologies without any regulation other than industry self-regulation is unlikely to lead to a safer, fairer world.
Developments in AI that could lead to a superintelligent AI, and developments in other fields that could lead to the merging of humans with machines, raise issues that need to be considered from a range of perspectives. If the future is Humanity 2.0 (or higher), then this should be a choice that humans make; similarly, if superintelligent AI is to develop, we need to ensure that its values are in line with those of humanity and the planet. However, there is a pluriversal and not just a universal notion of what constitutes value (Mignolo 2013). Perhaps a more holistic approach to assessing technology could also serve to guide policy contextually, as a substitute for humanity's conscience, and thereby shape technology in a consistent and more balanced way.