Artificial Intelligence and Big Data in the Age of COVID-19

The view that the COVID-19 pandemic has set in motion profound changes in our modern societies is practically unanimous. The global effort to contain, cure, and eradicate COVID-19 has benefited greatly from the use, development and/or adaptation of technological tools for mass surveillance based on artificial intelligence and robotics systems. Yet the management of the COVID-19 pandemic has also revealed many shortcomings generated by the need to make decisions "in extremis". Systematic lockdowns of entire populations pushed humans to increase their exposure to digital devices in order to achieve some sort of social connection. Some nations with sufficiently advanced technology used AI systems to access individual digital data in order to control and contain SARS-CoV-2. Massive surveillance of entire populations is now possible. In this way, the problem arises of how to establish an adequate balance between the utility and results offered by mass surveillance systems based on artificial intelligence and robotics in the fight against COVID-19, on the one hand, and the protection of personal and collective fundamental rights and freedoms, on the other.


Introduction
Breakthroughs in technological development in the last decade triggered a digital revolution with palpable consequences in our daily lives. The World Wide Web became a colossal data store that sees and remembers everything. Humans' dependency on technological devices is so profound that fusion with machines seems inevitable. This fusion produces a footprint trail that every one of us leaves behind, most of the time unnoticed. Smartphones, the Internet of Things, the automation of transit and transportation, e-commerce: all of this has made us bionic humans. Some may even say cyborgs. A milestone in triggering and speeding up the digital revolution was the disruptive arrival of deep learning in Artificial Intelligence (AI), i.e., the possibility of creating software capable of learning like a human but much faster and more effectively. AI came into widespread use in nearly all of our daily activities, and by the great majority of governmental agencies and private companies.
The COVID-19 pandemic accelerated human fusion with machines. Systematic lockdowns of entire populations pushed humans to increase their exposure to digital devices in order to achieve some sort of social connection. Some nations with sufficiently advanced technology used AI systems to access individual digital data in order to control and contain SARS-CoV-2. Massive surveillance of entire populations is now possible. The possibility of massive surveillance of people via AI raises profound questions regarding the rule of law, democracy and human rights. A new police-state model based on massive surveillance challenges the very roots of our criminal law systems, eroding the distinction between criminal offences and antisocial behavior.
Building on this constant predicament between dignity in privacy and value in information sharing, this article applies an ethical and legal analytic frame to pandemic emergencies and global health as unprecedented challenges for modern humankind. Human rights are a powerful tool for controlling the massive surveillance of people. However, existing universal and domestic legal standards seem inadequate to address the complexities of the digital revolution. New regulatory approaches become necessary. We need to develop new rights protection measures enforceable according to the characteristics of this new digital era. Market solutions - such as monitoring and tracking of medical devices and search engine results - coupled with anti-discrimination efforts in the medical tracking domain, appear necessary to imbue capitalism in the healthcare domain with an ethical notion in the digital age. The novel trend of COVID-19 Long Haulers sharing information freely online as a quick remedy, while clinics all over the world are overflowing, is discussed in light of the privacy concerns and online manipulation threats faced by a particularly vulnerable population.

AI Systems and the digital footprint trail
Artificial Intelligence (AI) allows the development of computer systems capable of emulating and carrying out activities typical of human beings, such as perceiving, reasoning, learning and solving problems. The aim of an AI system is to perform tasks or solve problems with results similar or superior to human qualifications and performance (Independent High-Level Expert Group on Artificial Intelligence 2019). From social tasks, such as solving legal conflicts, operating the financial system, monitoring human activity, or navigating cars, ships or aircraft, to individual ones, such as financial management advice or music, film or video selection, the AI systems that coexist in our societies can perform many tasks or functions that were once carried out only by human beings (De Asís Roig 2018). AI has permeated all sorts of tasks in modern life (Chace 2016; Kasparov 2018). Contrary to previous mechanical revolutions, the AI revolution now competes with humans over cognitive tasks (Harari 2018).
A great qualitative and quantitative leap in the development and use of algorithms has occurred in the last two decades, in which computers have exponentially multiplied their capacity to process data. Total interconnection through the Internet allows the constant collection of massive amounts of data, which get amalgamated into 'big data' (Hawkins 2018). Algorithms have now achieved learn-and-improve capacities once thought exclusive to human minds, which is referred to as "machine learning" (Domingos 2018).
With the rise of Internet companies, such as Facebook and Twitter, and the popularity of smart devices, we have become familiar with constant posting and leaving a trace of our relationship statuses, comments, preferences and locations. All this information is collected in real-time and stored as data that can then be analyzed. Because we can discover valuable insights from such data, we are likely to see the trend continue, with innovations in capturing data from sources we had not previously thought of as information trackers (Mayer-Schönberger & Cukier 2013). With the advent and massification of so-called social networks -personal portals where people constantly register and consult information from their social environments -big data became one of the most valuable assets in the modern world. This trend is part of the process of constant datafication -capturing information about the world as data.
The availability and use of big data have also generated notable repercussions in recent years (Bartlett 2018; Smith & Telang 2017; Stephens-Davidowitz 2018; Frank, Roehring & Pring 2017). Probably the first clue to the existence of big data was the Internet search. Suddenly the entire world began to use search engines en masse, coining a verb for the action: 'to google' something. Googling, beyond its usefulness, leaves a trace of our digital activity. That trace can then be associated with a specific person or transformed into information of marketable value, such as knowledge of his/her wishes or preferences in real time (Gilder 2018). The modern "AI system" refers to self-sufficient programs made up of a complex web of algorithms that are constantly fed by big data. Algorithms are the DNA of AI systems, and big data is the energy that enables them to grow, develop and perform.

Massive Surveillance in the age of COVID-19
The COVID-19 pandemic started at the end of 2019, when the novel coronavirus SARS-CoV-2 was first identified in China. To this day, there are over 230 million reported COVID-19 infections and almost 5 million reported deaths around the world (Worldometer 2021).
In regards to AI and big data, the COVID-19 pandemic has set in motion profound changes in our modern societies. The high contagiousness and rapid transmission of the virus have forced us to make decisions and implement substantial lifestyle shifts. Global priorities in the fight against COVID-19 have mainly focused on containing the epidemic, protecting and curing the sick, and developing biotechnology to combat, and preferably eradicate, this deadly virus.
The global effort to contain, cure, and eradicate COVID-19 has benefited greatly from the use, development and/or adaptation of technological tools for mass surveillance based on AI and big data insights. To a greater or lesser extent, all developed nations have declared health emergencies affecting the freedom of movement of people and products, and have implemented, in parallel, different mass surveillance tools to carry out the required social control.
AI systems in our daily lives can have serious consequences for social equality, democracy and even the very nature of our species. In an article published in the Financial Times in 2020, Yuval Harari highlights about the COVID-19 crisis that "the storm will pass, but the decisions we make now can change our lives for years to come," considering the massive surveillance systems in place through smartphones and facial recognition cameras that constantly track and monitor society, as well as the new devices used to report body temperature and health condition online in real time. Through these tools, we now not only have the opportunity to quickly identify possible carriers of the virus, but can also monitor people's movements, interactions, behavior and health status at all times and everywhere. Harari (2020) points out the deployment of mass surveillance tools in countries that had so far rejected them, which represents a dramatic surveillance transition worldwide.
Government entities are not the only ones using the power of big-data-driven surveillance. Shoshana Zuboff (2019) describes in The Age of Surveillance Capitalism how private businesses are collecting information about all aspects of the human experience, which is turned into data and sold to a variety of businesses for a variety of reasons. Zuboff (2019) also warns that surveillance capitalists hope to identify key moments of sensitivity in order to increase the chances of purchase and modify behavior in line with behaviorism efforts.
AI was originally designed to be intelligent, but - as Stuart Russell (2020) concludes - the way we currently design AI is not necessarily in humanity's best interests. The wish remains to control super-intelligent AI and harness its immense power to advance our civilization without losing our autonomy to the whims of a superior intelligence. We are also at risk of losing humanity and certain human qualities that are not replicable by AI - such as leadership, empathy or creativity - when relying heavily on AI (Cremer 2020; Du Sautoy 2019). Effective control methods should be put in place in time (Domingos 2018; Gilder 2018; Hawking 2018).

Unregulated space and ethical predicaments of massive surveillance
Despite multiple effects, AI systems and big data insights have two core legal impacts on individual rights: in the domains of equality and privacy. Equality is infringed upon by prejudices arising from big data. Privacy concerns appear when personal information, as private property, gets reaped by entities generating big data insights (Bariffi 2021).
Jamie Bartlett (2018) cautions that digital technology has brought undeniable benefits to humanity, but it also poses equally indisputable challenges to democracy - ranging from biases to totalitarian abuse of power. The problem is that algorithms were initially created to be neutral and fair by avoiding all-too-human biases and faulty logic. However, many of the algorithms used today, from the insurance market to the justice system, have incorporated the very prejudices and misconceptions of their designers. And since these algorithms operate on a massive scale, these biases multiply into countless unfair decisions (O'Neill 2018). The impact of AI systems and big data mining companies on individual rights reveals discrimination to the detriment of socially vulnerable groups defined by gender, race, immigration status, or disability.

Economics is concerned with utility. As one of the foundations of economics, utility theory captures people's preferences or values. The preference for communication is inherent in human beings as a distinct feature of humanity. Leaving a written legacy that can inform many generations to come is a uniquely human advancement of society. At the same time, however, privacy is a core human right and brings value to personal relations. People choose what information to share with whom and like to protect some parts of their selves. Protecting people's privacy is a codified virtue around the globe, grounded in the wish to uphold individual dignity.
In the age of instant communication, social media, big data storage and computational power, the need to understand people's trade-off between utility in communication and dignity in privacy has gained unprecedented momentum (Puaschunder 2019b). Today, enormous data storage capacities and computational power in the e-big-data era have created unforeseen opportunities for big data hoarding corporations to reap hidden benefits from individuals' information sharing, which occurs bit-by-bit in small tranches over time (Puaschunder 2019a).
Behavioral economics describes human decision-making fallibility over time but has - to this day - not covered the problem of individuals' decision to share information about themselves in tranches on social media while big data administrators are able to reap a benefit from putting the data together over time and reflecting the individual's information in relation to the big data of others (Puaschunder 2017a, b). The decision-making fallibility inherent in individuals having problems understanding the future impact of their current information sharing is introduced as a hyper-hyperbolic discounting decision-making predicament (Puaschunder 2017a, b, c; 2019a, b).
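The discounting mechanism behind this predicament can be illustrated with a small numerical sketch. The code below contrasts standard exponential discounting with a simple hyperbolic discounting function applied to a future privacy cost; the discount parameters and the utility units are illustrative assumptions chosen for the example, not estimates from the behavioral economics literature.

```python
# Illustrative sketch: why a future privacy harm may feel negligible today.
# Parameter values (rate, k, the 100-unit harm) are assumptions for illustration.

def exponential_discount(value, delay, rate=0.05):
    """Standard exponential discounting: value / (1 + rate)**delay."""
    return value / (1 + rate) ** delay

def hyperbolic_discount(value, delay, k=0.5):
    """Simple hyperbolic discounting: value / (1 + k * delay).
    With these parameters, distant harms are discounted far more steeply,
    so sharing information today 'feels' nearly costless."""
    return value / (1 + k * delay)

# A privacy harm worth 100 (arbitrary utility units) materializing
# `delay` years after information is shared online today.
for delay in (0, 1, 5, 10):
    exp_v = exponential_discount(100, delay)
    hyp_v = hyperbolic_discount(100, delay)
    print(f"delay={delay:>2}y  exponential={exp_v:6.1f}  hyperbolic={hyp_v:6.1f}")
```

Under these assumed parameters, a harm ten years away retains most of its weight under exponential discounting but shrinks to a small fraction under hyperbolic discounting, which mirrors the fallibility described above: the future consequences of today's bit-by-bit disclosures are systematically underweighted.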
Individuals lose control over their data without knowing what surplus value big data moguls can reap from the social media consumer-workers' information sharing, what information can be compiled over time, and what inferences this data can support, in relation to the general public's data, about the innocent individual information sharer (Puaschunder 2017a, b, c). In recent decades, big-data-derived personality cues have begun to be used for governance control purposes, such as border protection and tax compliance surveillance.
The COVID-19 healthcare crisis and pandemic emergency around the world has exacerbated governmental control of data for monitoring, tracking and preventive prediction purposes (Gelter & Puaschunder 2021;Puaschunder forthcoming;2020a, b).
A growing body of contemporary findings reveals that an estimated 10-30% of those previously infected with COVID-19 face some kind of long-term health impact and/or chronic debilitation that in many cases comes and goes in waves (Hart 2021). These so-called COVID Long Haulers are estimated to account for up to 1.9 billion people worldwide after the end of the pandemic. Given this large number of possible COVID Long Haulers, this health phenomenon is certain to have an enormous impact on society, medicine, the economy, the law and the governance of our world (Puaschunder & Gelter forthcoming).
Long Haulers will contribute to the ongoing digitalization revolution by taking advantage of real-time tracking of health status and environmental infection conditions. As digitalization allowed for remote learning, working and entertainment, the biggest deurbanization trend in US history emerged in the wake of COVID-19 (Puaschunder 2020a, b; Puaschunder & Gelter forthcoming). Current labor market shortages and the skyrocketing number of workers quitting will likely further amplify a digitalization revolution to replace human contact and low-skilled labor (Puaschunder 2020a, b). AI, robotics and big data insights come in handy when filling gaps for Long Haulers, who often face waves of debilitating conditions (Puaschunder & Gelter forthcoming). Robotics aids in patient care and hygiene (Puaschunder 2019c). Big data analytics have already revealed ground-breaking COVID long-haul insights that will likely lead the way to finding remedies for those in chronic pain (Puaschunder & Gelter forthcoming).
Moreover, Long Haulers have already found themselves in online self-help groups for quick and unbureaucratic information exchange about an emerging societal phenomenon (Puaschunder & Gelter forthcoming). Accounting for the nature, size and scope of the tragedy of Long COVID creates the imperative to protect these most vulnerable populations, who share sensitive information about their health and well-being in online social media forums, from having that information turned against them. Nowadays, COVID long-haul patients have become - more than ever before - citizen scientists who bundle decentralized information on their health status and potential remedies in order to inform the medical profession about newly emerging trends. The rise in medical self-help and mutual support will have profound implications for the regulation of the medical profession and will likely stretch the medical remedy spectrum and boost alternative medicine.
Instant online exchange of sensitive information about one's health status makes citizen scientists particularly vulnerable in terms of their privacy and potentially susceptible to online marketing campaigns under medically impaired conditions. The long-term impact of publicly disclosed sensitive information that is shared bit-by-bit online over time is also a concern. In the digital age, it is difficult to estimate what effects the piecemeal provision of private information will have over time, when, for example, personal health information disseminated in an internet forum is absorbed into large datasets. If information is analyzed and displayed in relation to other individuals' data, a combined dataset could open the gates to discrimination and stigmatization.
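The re-identification risk that arises when piecemeal disclosures are absorbed into larger datasets can be sketched in a few lines. In the toy example below, every name, attribute and value is invented for illustration: no single forum post identifies anyone, yet joining the quasi-identifiers shared across separate posts singles out one individual in the population.

```python
# Hypothetical illustration of a linkage attack: attributes disclosed
# separately over time, once combined, uniquely identify one person.
# All records and attribute values below are fictional.

population = [
    {"name": "A", "age_band": "30-39", "zip": "10001", "condition": "long COVID"},
    {"name": "B", "age_band": "30-39", "zip": "10001", "condition": "none"},
    {"name": "C", "age_band": "40-49", "zip": "10001", "condition": "long COVID"},
    {"name": "D", "age_band": "30-39", "zip": "10002", "condition": "long COVID"},
]

# Facts disclosed bit-by-bit in separate, individually harmless posts.
disclosed = {"age_band": "30-39", "zip": "10001", "condition": "long COVID"}

# Combining the disclosures narrows the candidate set to a single match.
matches = [person for person in population
           if all(person[key] == value for key, value in disclosed.items())]

print([person["name"] for person in matches])  # prints ['A']
```

Each attribute on its own matches several people; it is only the combination, assembled over time by whoever holds the aggregate data, that re-identifies the sharer and opens the gates to the discrimination and stigmatization discussed above.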
In the online exchange of sensitive information about one's health status, COVID Long Haulers, who have recently been recognized as a potentially disabled group, are particularly vulnerable in terms of their privacy: they are potentially susceptible to online marketing campaigns under medically impaired conditions, and their sensitive information is publicly disclosed online over time (The White House of the United States 2021). This online sharing of medical information raises important - but as yet hardly described - concerns about privacy, susceptibility to misinformation and discrimination in the vastly unregulated online social media arena, which calls for urgent attention.

Policy implications
As often happens when technological advances affect daily life without sufficient time for their legal regulation, a series of general principles and guidelines of a non-binding nature have developed in recent years, mainly through consensus at both regional and international levels (Bariffi 2021).
For instance, the Toronto Declaration of 2018 established three fundamental premises. First, the ethics of AI and how to make technology in this field human-centric must be analyzed through a human rights lens. Second, when developing AI, states and public and private actors must consider the new challenges that this technology poses for equality and the representation of, and impact on, diverse individuals and groups. Third, in the face of any discrimination, states must guarantee access to an effective judicial remedy (The Toronto Declaration 2018).
The Declaration on Ethics and Data Protection in Artificial Intelligence, adopted during the 2018 International Conference of institutions dedicated to data protection and privacy (ICDPPC), speaks in a very similar sense, adding two further premises or principles: transparency and responsibility (Declaration on Ethics and Data Protection in Artificial Intelligence 2018).
Also in 2018, The Public Voice organization approved the Universal Guidelines for Artificial Intelligence, a document endorsed by 50 scientific organizations and over 200 experts from around the world. The document outlines 12 principles which must be incorporated into ethical standards, adopted in national legislation and international agreements, and integrated into the design of AI systems. The principles include the (1) Right to Transparency, (2) Right to Human Determination, (3) Identification Obligation, (4) Fairness Obligation, (5) Assessment and Accountability Obligation, (6) Accuracy, Reliability, and Validity Obligations, (7) Data Quality Obligation, (8) Public Safety Obligation, (9) Cybersecurity Obligation, (10) Prohibition on Secret Profiling, (11) Prohibition on Unitary Scoring, and the (12) Termination Obligation.
The European Union (EU) has also sketched several documents to address the ethical and legal aspects of AI systems. For example, on April 8, 2019, the EU Commission adopted the Ethics Guidelines for Trustworthy Artificial Intelligence, establishing 7 key requirements that AI systems should meet in order to be deemed trustworthy: (1) Human Agency and Oversight, (2) Technical Robustness and Safety, (3) Privacy and Data Governance, (4) Transparency, (5) Diversity, Non-Discrimination and Fairness, (6) Societal and Environmental Well-Being, and (7) Accountability. According to this text, the trustworthiness of AI rests on three components that must be satisfied throughout the entire life cycle of the system: i) lawful - respecting all applicable laws and regulations; ii) ethical - respecting ethical principles and values; iii) robust - both from a technical perspective and considering its social environment.
The Council of Europe has also approved declarations along these lines. The Guidelines on Artificial Intelligence and Data Protection shape a series of general guidelines and instructions targeted at developers, manufacturers and service providers, as well as recommendations for legislators and policy makers. The Declaration of the Committee of Ministers on the manipulative capacities of algorithmic processes of February 2019 highlights that "sub-conscious and personalised levels of algorithmic persuasion may have significant effects on the cognitive autonomy of individuals and their right to form opinions and take independent decisions. These effects remain under-explored but cannot be underestimated. Not only may they weaken the exercise and enjoyment of individual human rights, but they may lead to the corrosion of the very foundation of the Council of Europe. Its central pillars of human rights, democracy and the rule of law are grounded on the fundamental belief in the equality and dignity of all humans as independent moral agents." Finally, the Recommendation Unboxing Artificial Intelligence: 10 Steps to Protect Human Rights of May 2019 underlines the need to carry out human rights impact assessments in relation to AI systems. As explained, there is already an incipient regulatory approach regarding the impact of AI systems on human rights, although for the moment these are only non-binding guidelines or principles of interpretation.
In the post-COVID-19 era, large-scale online information exchange about medical conditions and potential remedy alternatives is a rather novel phenomenon and therefore hardly regulated. The downsides of crowdsourcing health information online are emerging risks, unknown legal boundaries and potential liability concerns. Online crowdsourcing of information also opens gates to critical biases against those publicizing their health status online, as well as a risk of deception and fraud committed against a highly vulnerable population. International big data exchange could set standards for future pandemic prevention but should also provide big data privacy protection and legal anti-discrimination means against misuse of vulnerable patients' sensitive information about their disabilities and conditions - misuse that could, for instance, lead towards stigmatization (Cirruzzo 2021; The White House of the United States 2021).
As online sharing of sensitive information opens privacy concerns for a vulnerable and impaired group, the creation of legal and regulatory frameworks to prevent abuse of online forums for marketing purposes at the expense of the well-being of susceptible patients and impaired individuals in physical and emotional pain or debilitated conditions has become an urgent demand of our time. Long-term deliberations and hyperbolic discounting should be integrated into academic and political debates in order to protect individuals when innocently sharing medical information and compassionately seeking or extending non-medically-trained help (Puaschunder 2017a, b). Anonymous participation in new virtual realities currently also brings along completely new problems, such as cyber-crime, hate postings and social censorship by online mobs, which could be particularly harmful to vulnerable patients seeking remedies online. Governments and traditional media have lost control over public opinion in the digital age. Legal protection includes privacy in "big data" and the individual "right to be forgotten" online, as well as the dignity of conscientious data protection and online privacy (Mayer-Schönberger 2009). Healthy and informed access to new media needs to address the dilemma between the individual benefit from information exchange online and the human dignity of privacy on the internet.
On a wider societal scale, the digitalization disruption also brings along novel inequalities (Puaschunder 2020a, b). Inequality in internet connectivity, tech skills and affinity to digitalization turns AI-human compatibility into a competitive advantage. Digital online working conditions that make individual living conditions transparent emphasize social hierarchies in our work-related interactions and may further expose differences in social status in business and educational settings. Taxing the digital economy could create the fiscal space to offset the financial fallout from technological disruption and ensure that education and professional training emphasize the conscientious use of new technologies (Puaschunder 2019a). Taxing internet-generated gains could also provide the fiscal space to offset online inequalities by granting access, tools and capabilities to underprivileged segments and those disabled by COVID long haul. All these endeavors could ennoble society's most fascinating innovations with a humane sense of attention to human rights, inequality alleviation and compassionate care.